Skytoby

Understanding the Android Camera Architecture in Depth (Part 2): The Service Layer


1. Overview

Camera Service is designed to run as a standalone process. Acting as a server, it handles cross-process requests from the Camera Framework client, performs some processing internally, and then, acting as a client itself, forwards the request to the Camera Provider, which plays the server role. The whole flow therefore involves two cross-process hops: the first is implemented with the AIDL mechanism and the second with the HIDL mechanism. Since the Service acts as a client when communicating with the Camera Provider, this article focuses on the AIDL interfaces and on the implementation of the Camera Service main program.

2. Camera AIDL Interfaces

Before looking at the Camera AIDL interfaces, it is worth briefly reviewing what AIDL is and why Google introduced such a mechanism.

In Android, two processes normally cannot access each other's memory. To solve this, Google provided Messenger, broadcasts, and Binder. When one process needs concurrent, multi-threaded access into another process, Messenger and broadcasts do not work well, so Binder is the primary mechanism. However, the raw Binder interfaces are fairly complex and not very friendly to developers, especially beginners. To lower the barrier of cross-process development, Google introduced AIDL (Android Interface Definition Language), which wraps the Binder implementation details and exposes a much simpler interface, greatly improving developer productivity.

Following the AIDL conventions, the server side creates a set of *.aidl files that declare the public interfaces offered to clients and then implements them. Let's look at the main .aidl files used by the camera service.
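To make the client side of this mechanism concrete, the short sketch below shows how a native process could obtain the ICameraService interface from servicemanager and call one of the methods declared in the .aidl file. This is only a minimal illustration: it assumes the aidl-cpp generated header and the "media.camera" service name, omits error handling, and is not part of the camera sources quoted in this article.

// Minimal native-client sketch (assumes the aidl-cpp generated ICameraService
// header and the "media.camera" service name; error handling omitted).
#include <binder/IServiceManager.h>
#include <binder/IInterface.h>
#include <android/hardware/ICameraService.h>

using namespace android;

int32_t queryCameraCount() {
    // Look up the binder object that cameraserver published to servicemanager.
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.camera"));
    sp<hardware::ICameraService> cs = interface_cast<hardware::ICameraService>(binder);

    // Call an AIDL-declared method; the generated proxy marshals it over Binder.
    int32_t numCameras = 0;
    cs->getNumberOfCameras(hardware::ICameraService::CAMERA_TYPE_ALL, &numCameras);
    return numCameras;
}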

2.1 ICameraService.aidl

ICameraService.aidl defines the ICameraService interface, which is implemented mainly by the CameraService class. Its main methods are:

  • getNumberOfCameras: returns the number of cameras supported by the system
  • connectDevice(): opens a camera device
  • addListener(): adds a listener for camera device and flashlight state changes

[->frameworks\av\camera\aidl\android\hardware\ICameraService.aidl]

interface ICameraService
{
/**
* All camera service and device Binder calls may return a
* ServiceSpecificException with the following error codes
*/
const int ERROR_PERMISSION_DENIED = 1;
const int ERROR_ALREADY_EXISTS = 2;
const int ERROR_ILLEGAL_ARGUMENT = 3;
const int ERROR_DISCONNECTED = 4;
const int ERROR_TIMED_OUT = 5;
const int ERROR_DISABLED = 6;
const int ERROR_CAMERA_IN_USE = 7;
const int ERROR_MAX_CAMERAS_IN_USE = 8;
const int ERROR_DEPRECATED_HAL = 9;
const int ERROR_INVALID_OPERATION = 10;

/**
* Types for getNumberOfCameras
*/
const int CAMERA_TYPE_BACKWARD_COMPATIBLE = 0;
const int CAMERA_TYPE_ALL = 1;

/**
* Return the number of camera devices available in the system
*/
int getNumberOfCameras(int type);

/**
* Fetch basic camera information for a camera device
*/
CameraInfo getCameraInfo(int cameraId);

/**
* Default UID/PID values for non-privileged callers of
* connect(), connectDevice(), and connectLegacy()
*/
const int USE_CALLING_UID = -1;
const int USE_CALLING_PID = -1;

/**
* Open a camera device through the old camera API
*/
ICamera connect(ICameraClient client,
int cameraId,
String opPackageName,
int clientUid, int clientPid);

/**
* Open a camera device through the new camera API
* Only supported for device HAL versions >= 3.2
*/
ICameraDeviceUser connectDevice(ICameraDeviceCallbacks callbacks,
String cameraId,
String opPackageName,
int clientUid);

/**
* halVersion constant for connectLegacy
*/
const int CAMERA_HAL_API_VERSION_UNSPECIFIED = -1;

/**
* Open a camera device in legacy mode, if supported by the camera module HAL.
*/
ICamera connectLegacy(ICameraClient client,
int cameraId,
int halVersion,
String opPackageName,
int clientUid);

/**
* Add listener for changes to camera device and flashlight state.
*
* Also returns the set of currently-known camera IDs and state of each device.
* Adding a listener will trigger the torch status listener to fire for all
* devices that have a flash unit.
*/
CameraStatus[] addListener(ICameraServiceListener listener);

/**
* Remove listener for changes to camera device and flashlight state.
*/
void removeListener(ICameraServiceListener listener);

/**
* Read the static camera metadata for a camera device.
* Only supported for device HAL versions >= 3.2
*/
CameraMetadataNative getCameraCharacteristics(String cameraId);

/**
* Read in the vendor tag descriptors from the camera module HAL.
* Intended to be used by the native code of CameraMetadataNative to correctly
* interpret camera metadata with vendor tags.
*/
VendorTagDescriptor getCameraVendorTagDescriptor();

/**
* Retrieve the vendor tag descriptor cache which can have multiple vendor
* providers.
* Intended to be used by the native code of CameraMetadataNative to correctly
* interpret camera metadata with vendor tags.
*/
VendorTagDescriptorCache getCameraVendorTagCache();

/**
* Read the legacy camera1 parameters into a String
*/
String getLegacyParameters(int cameraId);

/**
* apiVersion constants for supportsCameraApi
*/
const int API_VERSION_1 = 1;
const int API_VERSION_2 = 2;

// Determines if a particular API version is supported directly for a cameraId.
boolean supportsCameraApi(String cameraId, int apiVersion);
// Determines if a cameraId is a hidden physical camera of a logical multi-camera.
boolean isHiddenPhysicalCamera(String cameraId);

void setTorchMode(String cameraId, boolean enabled, IBinder clientBinder);

/**
* Notify the camera service of a system event. Should only be called from system_server.
*
* Callers require the android.permission.CAMERA_SEND_SYSTEM_EVENTS permission.
*/
const int EVENT_NONE = 0;
const int EVENT_USER_SWITCHED = 1; // The argument is the set of new foreground user IDs.
oneway void notifySystemEvent(int eventId, in int[] args);

/**
* Notify the camera service of a device physical status change. May only be called from
* a privileged process.
*
* newState is a bitfield consisting of DEVICE_STATE_* values combined together. Valid state
* combinations are device-specific. At device startup, the camera service will assume the device
* state is NORMAL until otherwise notified.
*
* Callers require the android.permission.CAMERA_SEND_SYSTEM_EVENTS permission.
*/
oneway void notifyDeviceStateChange(long newState);

// Bitfield constants for notifyDeviceStateChange
// All bits >= 32 are for custom vendor states
// Written as ints since AIDL does not support long constants.
const int DEVICE_STATE_NORMAL = 0;
const int DEVICE_STATE_BACK_COVERED = 1;
const int DEVICE_STATE_FRONT_COVERED = 2;
const int DEVICE_STATE_FOLDED = 4;
const int DEVICE_STATE_LAST_FRAMEWORK_BIT = 0x80000000; // 1 << 31;
}

2.2 ICameraDeviceCallbacks.aidl

ICameraDeviceCallbacks.aidl defines the ICameraDeviceCallbacks interface, which is implemented by the CameraDeviceCallbacks class in the Framework. Its main methods are:

  • onResultReceived: called by the Service to deliver result data to the Framework as soon as it is received
  • onCaptureStarted(): called when image capture starts, passing partial information and the timestamp up to the Framework
  • onDeviceError(): called to notify the Framework when an error occurs

[->frameworks\av\camera\aidl\android\hardware\camera2\ICameraDeviceCallbacks.aidl]

interface ICameraDeviceCallbacks
{
// Error codes for onDeviceError
const int ERROR_CAMERA_INVALID_ERROR = -1; // To indicate all invalid error codes
const int ERROR_CAMERA_DISCONNECTED = 0;
const int ERROR_CAMERA_DEVICE = 1;
const int ERROR_CAMERA_SERVICE = 2;
const int ERROR_CAMERA_REQUEST = 3;
const int ERROR_CAMERA_RESULT = 4;
const int ERROR_CAMERA_BUFFER = 5;
const int ERROR_CAMERA_DISABLED = 6;

oneway void onDeviceError(int errorCode, in CaptureResultExtras resultExtras);
oneway void onDeviceIdle();
oneway void onCaptureStarted(in CaptureResultExtras resultExtras, long timestamp);
oneway void onResultReceived(in CameraMetadataNative result,
in CaptureResultExtras resultExtras,
in PhysicalCaptureResultInfo[] physicalCaptureResultInfos);
oneway void onPrepared(int streamId);

/**
* Repeating request encountered an error and was stopped.
*
* @param lastFrameNumber Frame number of the last frame of the streaming request.
* @param repeatingRequestId the ID of the repeating request being stopped
*/
oneway void onRepeatingRequestError(in long lastFrameNumber,
in int repeatingRequestId);
oneway void onRequestQueueEmpty();
}

2.3 ICameraDeviceUser.aidl

ICameraDeviceUser.aidl defines the ICameraDeviceUser interface, which is ultimately implemented by CameraDeviceClient. Its main methods are:

  • disconnect: closes the camera device
  • submitRequestList: submits capture requests
  • beginConfigure: starts configuring the camera device; must precede any stream operation
  • endConfigure: finishes configuring the camera device; must be called before any request is submitted (see the call-order sketch after this list)
  • createDefaultRequest: creates a request with default settings
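To make the ordering constraints above concrete, here is a rough sketch of the sequence a client of ICameraDeviceUser is expected to follow. It assumes the aidl-cpp generated C++ proxy and the parcelable headers under frameworks/av/camera; the exact signatures and the way a CaptureRequest is populated differ across Android versions, so treat it purely as an illustration of call order, not as framework code.

// Call-order sketch for ICameraDeviceUser (assumed aidl-cpp proxy; no error handling).
#include <android/hardware/camera2/ICameraDeviceUser.h>
#include <camera/CameraMetadata.h>
#include <camera/camera2/OutputConfiguration.h>

using namespace android;
using hardware::camera2::ICameraDeviceUser;

void configureAndPrepareRequest(const sp<ICameraDeviceUser>& dev,
        const hardware::camera2::params::OutputConfiguration& previewOutput) {
    // 1. Enter the configuration state before any stream operation.
    dev->beginConfigure();

    // 2. Create the output stream(s).
    int32_t streamId = -1;
    dev->createStream(previewOutput, &streamId);

    // 3. Finish configuration before any request may be submitted.
    hardware::camera2::impl::CameraMetadataNative sessionParams;
    dev->endConfigure(ICameraDeviceUser::NORMAL_MODE, sessionParams);

    // 4. Build a request from the preview template; filling a CaptureRequest
    //    parcelable and calling submitRequest() is omitted here because its
    //    fields vary between Android versions.
    hardware::camera2::impl::CameraMetadataNative previewTemplate;
    dev->createDefaultRequest(ICameraDeviceUser::TEMPLATE_PREVIEW, &previewTemplate);
}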

[->frameworks\av\camera\aidl\android\hardware\camera2\ICameraDeviceUser.aidl]

interface ICameraDeviceUser
{
void disconnect();

const int NO_IN_FLIGHT_REPEATING_FRAMES = -1;

SubmitInfo submitRequest(in CaptureRequest request, boolean streaming);
SubmitInfo submitRequestList(in CaptureRequest[] requestList, boolean streaming);

/**
* Cancel the repeating request specified by requestId
* Returns the frame number of the last frame that will be produced from this
* repeating request, or NO_IN_FLIGHT_REPEATING_FRAMES if no frames were produced
* by this repeating request.
*
* Repeating request may be stopped by camera device due to an error. Canceling a stopped
* repeating request will trigger ERROR_ILLEGAL_ARGUMENT.
*/
long cancelRequest(int requestId);

/**
* Begin the device configuration.
*
* <p>
* beginConfigure must be called before any call to deleteStream, createStream,
* or endConfigure. It is not valid to call this when the device is not idle.
* <p>
*/
void beginConfigure();

/**
* The standard operating mode for a camera device; all API guarantees are in force
*/
const int NORMAL_MODE = 0;

/**
* High-speed recording mode; only two outputs targeting preview and video recording may be
* used, and requests must be batched.
*/
const int CONSTRAINED_HIGH_SPEED_MODE = 1;

/**
* Start of custom vendor modes
*/
const int VENDOR_MODE_START = 0x8000;

/**
* End the device configuration.
*
* <p>
* endConfigure must be called after stream configuration is complete (i.e. after
* a call to beginConfigure and subsequent createStream/deleteStream calls). This
* must be called before any requests can be submitted.
* <p>
* @param operatingMode The kind of session to create; either NORMAL_MODE or
* CONSTRAINED_HIGH_SPEED_MODE. Must be a non-negative value.
* @param sessionParams Session wide camera parameters
*/
void endConfigure(int operatingMode, in CameraMetadataNative sessionParams);

/**
* Check whether a particular session configuration has camera device
* support.
*
* @param sessionConfiguration Specific session configuration to be verified.
* @return true - in case the stream combination is supported.
* false - in case there is no device support.
*/
boolean isSessionConfigurationSupported(in SessionConfiguration sessionConfiguration);

void deleteStream(int streamId);

/**
* Create an output stream
*
* <p>Create an output stream based on the given output configuration</p>
*
* @param outputConfiguration size, format, and other parameters for the stream
* @return new stream ID
*/
int createStream(in OutputConfiguration outputConfiguration);

/**
* Create an input stream
*
* <p>Create an input stream of width, height, and format</p>
*
* @param width Width of the input buffers
* @param height Height of the input buffers
* @param format Format of the input buffers. One of HAL_PIXEL_FORMAT_*.
*
* @return new stream ID
*/
int createInputStream(int width, int height, int format);

/**
* Get the surface of the input stream.
*
* <p>It's valid to call this method only after a stream configuration is completed
* successfully and the stream configuration includes a input stream.</p>
*
* @param surface An output argument for the surface of the input stream buffer queue.
*/
Surface getInputSurface();

// Keep in sync with public API in
// frameworks/base/core/java/android/hardware/camera2/CameraDevice.java
const int TEMPLATE_PREVIEW = 1;
const int TEMPLATE_STILL_CAPTURE = 2;
const int TEMPLATE_RECORD = 3;
const int TEMPLATE_VIDEO_SNAPSHOT = 4;
const int TEMPLATE_ZERO_SHUTTER_LAG = 5;
const int TEMPLATE_MANUAL = 6;

CameraMetadataNative createDefaultRequest(int templateId);

CameraMetadataNative getCameraInfo();

void waitUntilIdle();

long flush();

void prepare(int streamId);

void tearDown(int streamId);

void prepare2(int maxCount, int streamId);

void updateOutputConfiguration(int streamId, in OutputConfiguration outputConfiguration);

void finalizeOutputConfigurations(int streamId, in OutputConfiguration outputConfiguration);
}

2.4 ICameraServiceListener.aidl

ICameraServiceListener.aidl defines the ICameraServiceListener interface, which is implemented by the CameraManagerGlobal class in the Framework. Its main methods are:

  • onStatusChanged: notifies listeners that the state of a camera device has changed
  • onCameraOpened: notifies listeners that a camera has been opened
  • onCameraClosed: notifies listeners that a camera has been closed

[->frameworks\av\camera\aidl\android\hardware\ICameraServiceListener.aidl]

interface ICameraServiceListener
{

/**
* Initial status will be transmitted with onStatusChange immediately
* after this listener is added to the service listener list.
*
* Allowed transitions:
*
* (Any) -> NOT_PRESENT
* NOT_PRESENT -> PRESENT
* NOT_PRESENT -> ENUMERATING
* ENUMERATING -> PRESENT
* PRESENT -> NOT_AVAILABLE
* NOT_AVAILABLE -> PRESENT
*
* A state will never immediately transition back to itself.
*
* The enums must match the values in
* include/hardware/camera_common.h when applicable
*/
// Device physically unplugged
const int STATUS_NOT_PRESENT = 0;
// Device physically has been plugged in and the camera can be used exclusively
const int STATUS_PRESENT = 1;
// Device physically has been plugged in but it will not be connect-able until enumeration is
// complete
const int STATUS_ENUMERATING = 2;
// Camera is in use by another app and cannot be used exclusively
const int STATUS_NOT_AVAILABLE = -2;

// Use to initialize variables only
const int STATUS_UNKNOWN = -1;

oneway void onStatusChanged(int status, String cameraId);

/**
* The torch mode status of a camera.
*
* Initial status will be transmitted with onTorchStatusChanged immediately
* after this listener is added to the service listener list.
*
* The enums must match the values in
* include/hardware/camera_common.h
*/
// The camera's torch mode has become not available to use via
// setTorchMode().
const int TORCH_STATUS_NOT_AVAILABLE = 0;
// The camera's torch mode is off and available to be turned on via
// setTorchMode().
const int TORCH_STATUS_AVAILABLE_OFF = 1;
// The camera's torch mode is on and available to be turned off via
// setTorchMode().
const int TORCH_STATUS_AVAILABLE_ON = 2;

// Use to initialize variables only
const int TORCH_STATUS_UNKNOWN = -1;

oneway void onTorchStatusChanged(int status, String cameraId);

/**
* Notify registered clients about camera access priority changes.
* Clients which were previously unable to open a certain camera device
* can retry after receiving this callback.
*/
oneway void onCameraAccessPrioritiesChanged();

/**
* Notify registered clients about cameras being opened/closed.
* Only clients with android.permission.CAMERA_OPEN_CLOSE_LISTENER permission
* will receive such callbacks.
*/
oneway void onCameraOpened(String cameraId, String clientPackageId);
oneway void onCameraClosed(String cameraId);
}

3. Camera Service

The Camera Service main program starts with the system. Its main purpose is to expose the AIDL interfaces to the Framework, establish communication with the Provider by calling the Camera Provider's HIDL interfaces, manage the resources obtained from the Framework and the Provider internally, and keep the whole service running stably and efficiently within a well-defined structure. The following sections walk through its startup/initialization and how it handles requests from applications.

3.1 Startup and Initialization

3.1.1 cameraserver.rc

The cameraserver process is declared in cameraserver.rc and started by init according to this configuration:

[->frameworks\av\camera\cameraserver\cameraserver.rc]

service cameraserver /system/bin/cameraserver
class main
user cameraserver
group system audio camera input drmrpc sdcard_rw sdcard_r sdcard_all
ioprio rt 4
writepid /dev/cpuset/camera-daemon/tasks /dev/stune/foreground/tasks
rlimit rtprio 10 10

3.1.2 main_cameraserver.cpp

The main function in main_cameraserver.cpp is then executed:

[->frameworks\av\camera\cameraserver\main_cameraserver.cpp]

int main(int argc __unused, char** argv __unused)
{
signal(SIGPIPE, SIG_IGN);

// Set 5 threads for HIDL calls. Now cameraserver will serve HIDL calls in
// addition to consuming them from the Camera HAL as well.
hardware::configureRpcThreadpool(5, /*willjoin*/ false);

sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
//Instantiate and register CameraService
CameraService::instantiate();
ALOGI("ServiceManager: %p done instantiate", sm.get());
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
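CameraService::instantiate() comes from the BinderService<> helper template. Paraphrased below (a simplified sketch, not the exact AOSP code), it publishes the service to servicemanager under the name returned by getServiceName() ("media.camera"); wrapping the newly created CameraService in a strong pointer for the first time is what triggers onFirstRef() in the next step.

// Simplified sketch of BinderService<SERVICE>::instantiate()/publish()
// (paraphrased; the real helper lives in frameworks/native).
#include <binder/IServiceManager.h>
#include <utils/Errors.h>
#include <utils/StrongPointer.h>

template<typename SERVICE>
class BinderServiceSketch {
public:
    static void instantiate() { publish(); }

    static android::status_t publish() {
        android::sp<android::IServiceManager> sm(android::defaultServiceManager());
        // "new SERVICE()" is wrapped into a strong pointer here, which takes the
        // first strong reference and therefore invokes SERVICE::onFirstRef().
        return sm->addService(android::String16(SERVICE::getServiceName()), new SERVICE());
    }
};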

3.1.3 CameraService::onFirstRef

After instantiate() runs, CameraService::onFirstRef() is invoked:

[->frameworks\av\services\camera\libcameraservice\CameraService.cpp]

void CameraService::onFirstRef()
{
ALOGI("CameraService process starting");

BnCameraService::onFirstRef();

// Update battery life tracking if service is restarting
BatteryNotifier& notifier(BatteryNotifier::getInstance());
notifier.noteResetCamera();
notifier.noteResetFlashlight();

status_t res = INVALID_OPERATION;

res = enumerateProviders();
if (res == OK) {
mInitialized = true;
}

mUidPolicy = new UidPolicy(this);
mUidPolicy->registerSelf();
mSensorPrivacyPolicy = new SensorPrivacyPolicy(this);
mSensorPrivacyPolicy->registerSelf();
//Create the HIDL camera service wrapper
sp<HidlCameraService> hcs = HidlCameraService::getInstance(this);
//Register it with hwservicemanager so HIDL clients can reach the camera service
if (hcs->registerAsService() != android::OK) {
ALOGE("%s: Failed to register default android.frameworks.cameraservice.service@1.0",
__FUNCTION__);
}

// This needs to be last call in this function, so that it's as close to
// ServiceManager::addService() as possible.
CameraService::pingCameraServiceProxy();
ALOGI("CameraService pinged cameraservice proxy");
}

3.1.4 CameraService::enumerateProviders

enumerateProviders instantiates a CameraProviderManager object and initializes it:

status_t CameraService::enumerateProviders() {
status_t res;

std::vector<std::string> deviceIds;
{
Mutex::Autolock l(mServiceLock);

if (nullptr == mCameraProviderManager.get()) {
mCameraProviderManager = new CameraProviderManager();
res = mCameraProviderManager->initialize(this);
if (res != OK) {
ALOGE("%s: Unable to initialize camera provider manager: %s (%d)",
__FUNCTION__, strerror(-res), res);
return res;
}
}


// Setup vendor tags before we call get_camera_info the first time
// because HAL might need to setup static vendor keys in get_camera_info
// TODO: maybe put this into CameraProviderManager::initialize()?
mCameraProviderManager->setUpVendorTags();

if (nullptr == mFlashlight.get()) {
mFlashlight = new CameraFlashlight(mCameraProviderManager, this);
}

res = mFlashlight->findFlashUnits();
if (res != OK) {
ALOGE("Failed to enumerate flash units: %s (%d)", strerror(-res), res);
}

deviceIds = mCameraProviderManager->getCameraDeviceIds();
}


for (auto& cameraId : deviceIds) {
String8 id8 = String8(cameraId.c_str());
if (getCameraState(id8) == nullptr) {
onDeviceStatusChanged(id8, CameraDeviceStatus::PRESENT);
}
}

return OK;
}

3.1.5 CameraProviderManager::initialize

[->frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp]

CameraProviderManager initialization:

status_t CameraProviderManager::initialize(wp<CameraProviderManager::StatusListener> listener,
ServiceInteractionProxy* proxy) {
std::lock_guard<std::mutex> lock(mInterfaceMutex);
if (proxy == nullptr) {
ALOGE("%s: No valid service interaction proxy provided", __FUNCTION__);
return BAD_VALUE;
}
mListener = listener;
mServiceProxy = proxy;
mDeviceState = static_cast<hardware::hidl_bitfield<provider::V2_5::DeviceState>>(
provider::V2_5::DeviceState::NORMAL);

// Registering will trigger notifications for all already-known providers
bool success = mServiceProxy->registerForNotifications(
/* instance name, empty means no filter */ "",
this);
if (!success) {
ALOGE("%s: Unable to register with hardware service manager for notifications "
"about camera providers", __FUNCTION__);
return INVALID_OPERATION;
}


for (const auto& instance : mServiceProxy->listServices()) {
this->addProviderLocked(instance);
}

IPCThreadState::self()->flushCommands();

return OK;
}

3.1.6 addProviderLocked

[->frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp]

status_t CameraProviderManager::addProviderLocked(const std::string& newProvider) {
for (const auto& providerInfo : mProviders) {
if (providerInfo->mProviderName == newProvider) {
ALOGW("%s: Camera provider HAL with name '%s' already registered", __FUNCTION__,
newProvider.c_str());
return ALREADY_EXISTS;
}
}

sp<provider::V2_4::ICameraProvider> interface;
//Obtain the ICameraProvider proxy
interface = mServiceProxy->getService(newProvider);

if (interface == nullptr) {
ALOGE("%s: Camera provider HAL '%s' is not actually available", __FUNCTION__,
newProvider.c_str());
return BAD_VALUE;
}
//Create and initialize the ProviderInfo
sp<ProviderInfo> providerInfo = new ProviderInfo(newProvider, this);
status_t res = providerInfo->initialize(interface, mDeviceState);
if (res != OK) {
return res;
}
//Add the ProviderInfo to the container for management
mProviders.push_back(providerInfo);

return OK;
}

3.1.7 ProviderInfo::initialize

[->frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp]

status_t CameraProviderManager::ProviderInfo::initialize(
sp<provider::V2_4::ICameraProvider>& interface,
hardware::hidl_bitfield<provider::V2_5::DeviceState> currentDeviceState) {
status_t res = parseProviderName(mProviderName, &mType, &mId);
if (res != OK) {
ALOGE("%s: Invalid provider name, ignoring", __FUNCTION__);
return BAD_VALUE;
}
ALOGI("Connecting to new camera provider: %s, isRemote? %d",
mProviderName.c_str(), interface->isRemote());

// Determine minor version
auto castResult = provider::V2_5::ICameraProvider::castFrom(interface);
if (castResult.isOk()) {
mMinorVersion = 5;
} else {
mMinorVersion = 4;
}

// cameraDeviceStatusChange callbacks may be called (and causing new devices added)
// before setCallback returns
hardware::Return<bool> linked = interface->linkToDeath(this, /*cookie*/ mId);
if (!linked.isOk()) {
ALOGE("%s: Transaction error in linking to camera provider '%s' death: %s",
__FUNCTION__, mProviderName.c_str(), linked.description().c_str());
return DEAD_OBJECT;
} else if (!linked) {
ALOGW("%s: Unable to link to provider '%s' death notifications",
__FUNCTION__, mProviderName.c_str());
}
//Save the ICameraProvider proxy in a member variable
if (!kEnableLazyHal) {
// Save HAL reference indefinitely
ALOGE("Saving Interface");
mSavedInterface = interface;
} else {
mActiveInterface = interface;
}
//Register with the Camera Provider to receive its event callbacks
hardware::Return<Status> status = interface->setCallback(this);
if (!status.isOk()) {
ALOGE("%s: Transaction error setting up callbacks with camera provider '%s': %s",
__FUNCTION__, mProviderName.c_str(), status.description().c_str());
return DEAD_OBJECT;
}
if (status != Status::OK) {
ALOGE("%s: Unable to register callbacks with camera provider '%s'",
__FUNCTION__, mProviderName.c_str());
return mapToStatusT(status);
}
ALOGE("%s: Setting device state for %s: 0x%" PRIx64,
__FUNCTION__, mProviderName.c_str(), mDeviceState);
notifyDeviceStateChange(currentDeviceState);

res = setUpVendorTags();
if (res != OK) {
ALOGE("%s: Unable to set up vendor tags from provider '%s'",
__FUNCTION__, mProviderName.c_str());
return res;
}

// Get initial list of camera devices, if any
std::vector<std::string> devices;
hardware::Return<void> ret = interface->getCameraIdList([&status, this, &devices](
Status idStatus,
const hardware::hidl_vec<hardware::hidl_string>& cameraDeviceNames) {
status = idStatus;
if (status == Status::OK) {
for (auto& name : cameraDeviceNames) {
uint16_t major, minor;
std::string type, id;
status_t res = parseDeviceName(name, &major, &minor, &type, &id);
if (res != OK) {
ALOGE("%s: Error parsing deviceName: %s: %d", __FUNCTION__, name.c_str(), res);
status = Status::INTERNAL_ERROR;
} else {
devices.push_back(name);
mProviderPublicCameraIds.push_back(id);
}
}
} });
if (!ret.isOk()) {
ALOGE("%s: Transaction error in getting camera ID list from provider '%s': %s",
__FUNCTION__, mProviderName.c_str(), linked.description().c_str());
return DEAD_OBJECT;
}
if (status != Status::OK) {
ALOGE("%s: Unable to query for camera devices from provider '%s'",
__FUNCTION__, mProviderName.c_str());
return mapToStatusT(status);
}

ret = interface->isSetTorchModeSupported(
[this](auto status, bool supported) {
if (status == Status::OK) {
mSetTorchModeSupported = supported;
}
});
if (!ret.isOk()) {
ALOGE("%s: Transaction error checking torch mode support '%s': %s",
__FUNCTION__, mProviderName.c_str(), ret.description().c_str());
return DEAD_OBJECT;
}

mIsRemote = interface->isRemote();

sp<StatusListener> listener = mManager->getStatusListener();
for (auto& device : devices) {
std::string id;
status_t res = addDevice(device, common::V1_0::CameraDeviceStatus::PRESENT, &id);
if (res != OK) {
ALOGE("%s: Unable to enumerate camera device '%s': %s (%d)",
__FUNCTION__, device.c_str(), strerror(-res), res);
continue;
}
}

// Process cached status callbacks
std::unique_ptr<std::vector<CameraStatusInfoT>> cachedStatus =
std::make_unique<std::vector<CameraStatusInfoT>>();
{
std::lock_guard<std::mutex> lock(mInitLock);
for (auto& statusInfo : mCachedStatus) {
std::string id, physicalId;
status_t res = OK;
res = cameraDeviceStatusChangeLocked(&id, statusInfo.cameraId, statusInfo.status);
if (res == OK) {
cachedStatus->emplace_back(statusInfo.isPhysicalCameraStatus,
id.c_str(), physicalId.c_str(), statusInfo.status);
}
}
mCachedStatus.clear();
mInitialized = true;
}

ALOGI("Camera provider %s ready with %zu camera devices",
mProviderName.c_str(), mDevices.size());

//mInitialized = true;
return OK;
}

3.1.8 addDevice

addDevice registers each camera device enumerated from the provider:

status_t CameraProviderManager::ProviderInfo::addDevice(const std::string& name,
CameraDeviceStatus initialStatus, /*out*/ std::string* parsedId) {

ALOGE("Enumerating new camera device: %s", name.c_str());

uint16_t major, minor;
std::string type, id;

status_t res = parseDeviceName(name, &major, &minor, &type, &id);
if (res != OK) {
ALOGE("Parse Failed");
return res;
}
if (type != mType) {
ALOGE("%s: Device type %s does not match provider type %s", __FUNCTION__,
type.c_str(), mType.c_str());
return BAD_VALUE;
}
if (mManager->isValidDeviceLocked(id, major)) {
ALOGE("%s: Device %s: ID %s is already in use for device major version %d", __FUNCTION__,
name.c_str(), id.c_str(), major);
return BAD_VALUE;
}

std::unique_ptr<DeviceInfo> deviceInfo;
ALOGE("major %d", major);
switch (major) {
case 1:
deviceInfo = initializeDeviceInfo<DeviceInfo1>(name, mProviderTagid,
id, minor);
break;
case 3:
deviceInfo = initializeDeviceInfo<DeviceInfo3>(name, mProviderTagid,
id, minor);
break;
default:
ALOGE("%s: Device %s: Unknown HIDL device HAL major version %d:", __FUNCTION__,
name.c_str(), major);
return BAD_VALUE;
}
if (deviceInfo == nullptr)
{
ALOGE("devinfo is NULL");
return BAD_VALUE;
}
deviceInfo->mStatus = initialStatus;
bool isAPI1Compatible = deviceInfo->isAPI1Compatible();
ALOGE("Adding the device");
//Store the DeviceInfo3 in the container for unified management
mDevices.push_back(std::move(deviceInfo));

mUniqueCameraIds.insert(id);
if (isAPI1Compatible) {
// addDevice can be called more than once for the same camera id if HAL
// supports openLegacy.
if (std::find(mUniqueAPI1CompatibleCameraIds.begin(), mUniqueAPI1CompatibleCameraIds.end(),
id) == mUniqueAPI1CompatibleCameraIds.end()) {
mUniqueAPI1CompatibleCameraIds.push_back(id);
}
}

if (parsedId != nullptr) {
*parsedId = id;
}
return OK;
}
3.1.8.1 initializeDeviceInfo
template<class DeviceInfoT>
std::unique_ptr<CameraProviderManager::ProviderInfo::DeviceInfo>
CameraProviderManager::ProviderInfo::initializeDeviceInfo(
const std::string &name, const metadata_vendor_id_t tagId,
const std::string &id, uint16_t minorVersion) {
Status status;

auto cameraInterface =
startDeviceInterface<typename DeviceInfoT::InterfaceT>(name);
if (cameraInterface == nullptr) return nullptr;

CameraResourceCost resourceCost;
cameraInterface->getResourceCost([&status, &resourceCost](
Status s, CameraResourceCost cost) {
status = s;
resourceCost = cost;
});
if (status != Status::OK) {
ALOGE("%s: Unable to obtain resource costs for camera device %s: %s", __FUNCTION__,
name.c_str(), statusToString(status));
return nullptr;
}

for (auto& conflictName : resourceCost.conflictingDevices) {
uint16_t major, minor;
std::string type, id;
status_t res = parseDeviceName(conflictName, &major, &minor, &type, &id);
if (res != OK) {
ALOGE("%s: Failed to parse conflicting device %s", __FUNCTION__, conflictName.c_str());
return nullptr;
}
conflictName = id;
}

return std::unique_ptr<DeviceInfo>(
new DeviceInfoT(name, tagId, id, minorVersion, resourceCost, this,
mProviderPublicCameraIds, cameraInterface));
}
3.1.8.2 startDeviceInterface
template<>
sp<device::V3_2::ICameraDevice>
CameraProviderManager::ProviderInfo::startDeviceInterface
<device::V3_2::ICameraDevice>(const std::string &name) {
Status status;
sp<device::V3_2::ICameraDevice> cameraInterface;
hardware::Return<void> ret;
const sp<provider::V2_4::ICameraProvider> interface = startProviderInterface();
if (interface == nullptr) {
return nullptr;
}
//Obtain the Provider-side ICameraDevice proxy
ret = interface->getCameraDeviceInterface_V3_x(name, [&status, &cameraInterface](
Status s, sp<device::V3_2::ICameraDevice> interface) {
status = s;
cameraInterface = interface;
});
if (!ret.isOk()) {
ALOGE("%s: Transaction error trying to obtain interface for camera device %s: %s",
__FUNCTION__, name.c_str(), ret.description().c_str());
return nullptr;
}
if (status != Status::OK) {
ALOGE("%s: Unable to obtain interface for camera device %s: %s", __FUNCTION__,
name.c_str(), statusToString(status));
return nullptr;
}
return cameraInterface;
}
3.1.8.3 new DeviceInfo3
CameraProviderManager::ProviderInfo::DeviceInfo3::DeviceInfo3(const std::string& name,
const metadata_vendor_id_t tagId, const std::string &id,
uint16_t minorVersion,
const CameraResourceCost& resourceCost,
sp<ProviderInfo> parentProvider,
const std::vector<std::string>& publicCameraIds,
sp<InterfaceT> interface) :
DeviceInfo(name, tagId, id, hardware::hidl_version{3, minorVersion},
publicCameraIds, resourceCost, parentProvider) {
// Get camera characteristics and initialize flash unit availability
Status status;
hardware::Return<void> ret;
ret = interface->getCameraCharacteristics([&status, this](Status s,
device::V3_2::CameraMetadata metadata) {
status = s;
if (s == Status::OK) {
camera_metadata_t *buffer =
reinterpret_cast<camera_metadata_t*>(metadata.data());
size_t expectedSize = metadata.size();
int res = validate_camera_metadata_structure(buffer, &expectedSize);
if (res == OK || res == CAMERA_METADATA_VALIDATION_SHIFTED) {
set_camera_metadata_vendor_id(buffer, mProviderTagid);
mCameraCharacteristics = buffer;
} else {
ALOGE("%s: Malformed camera metadata received from HAL", __FUNCTION__);
status = Status::INTERNAL_ERROR;
}
}
});
if (!ret.isOk()) {
ALOGE("%s: Transaction error getting camera characteristics for device %s"
" to check for a flash unit: %s", __FUNCTION__, id.c_str(),
ret.description().c_str());
return;
}
if (status != Status::OK) {
ALOGE("%s: Unable to get camera characteristics for device %s: %s (%d)",
__FUNCTION__, id.c_str(), CameraProviderManager::statusToString(status), status);
return;
}

mIsPublicallyHiddenSecureCamera = isPublicallyHiddenSecureCamera();

status_t res = fixupMonochromeTags();
if (OK != res) {
ALOGE("%s: Unable to fix up monochrome tags based for older HAL version: %s (%d)",
__FUNCTION__, strerror(-res), res);
return;
}
auto stat = addDynamicDepthTags();
if (OK != stat) {
ALOGE("%s: Failed appending dynamic depth tags: %s (%d)", __FUNCTION__, strerror(-stat),
stat);
}
res = deriveHeicTags();
if (OK != res) {
ALOGE("%s: Unable to derive HEIC tags based on camera and media capabilities: %s (%d)",
__FUNCTION__, strerror(-res), res);
}

camera_metadata_entry flashAvailable =
mCameraCharacteristics.find(ANDROID_FLASH_INFO_AVAILABLE);
if (flashAvailable.count == 1 &&
flashAvailable.data.u8[0] == ANDROID_FLASH_INFO_AVAILABLE_TRUE) {
mHasFlashUnit = true;
} else {
mHasFlashUnit = false;
}

queryPhysicalCameraIds();

// Get physical camera characteristics if applicable
auto castResult = device::V3_5::ICameraDevice::castFrom(interface);
if (!castResult.isOk()) {
ALOGV("%s: Unable to convert ICameraDevice instance to version 3.5", __FUNCTION__);
return;
}
sp<device::V3_5::ICameraDevice> interface_3_5 = castResult;
if (interface_3_5 == nullptr) {
ALOGE("%s: Converted ICameraDevice instance to nullptr", __FUNCTION__);
return;
}

if (mIsLogicalCamera) {
for (auto& id : mPhysicalIds) {
if (std::find(mPublicCameraIds.begin(), mPublicCameraIds.end(), id) !=
mPublicCameraIds.end()) {
continue;
}

hardware::hidl_string hidlId(id);
ret = interface_3_5->getPhysicalCameraCharacteristics(hidlId,
[&status, &id, this](Status s, device::V3_2::CameraMetadata metadata) {
status = s;
if (s == Status::OK) {
camera_metadata_t *buffer =
reinterpret_cast<camera_metadata_t*>(metadata.data());
size_t expectedSize = metadata.size();
int res = validate_camera_metadata_structure(buffer, &expectedSize);
if (res == OK || res == CAMERA_METADATA_VALIDATION_SHIFTED) {
set_camera_metadata_vendor_id(buffer, mProviderTagid);
mPhysicalCameraCharacteristics[id] = buffer;
} else {
ALOGE("%s: Malformed camera metadata received from HAL", __FUNCTION__);
status = Status::INTERNAL_ERROR;
}
}
});

if (!ret.isOk()) {
ALOGE("%s: Transaction error getting physical camera %s characteristics for %s: %s",
__FUNCTION__, id.c_str(), id.c_str(), ret.description().c_str());
return;
}
if (status != Status::OK) {
ALOGE("%s: Unable to get physical camera %s characteristics for device %s: %s (%d)",
__FUNCTION__, id.c_str(), mId.c_str(),
CameraProviderManager::statusToString(status), status);
return;
}
}
}

if (!kEnableLazyHal) {
// Save HAL reference indefinitely
mSavedInterface = interface;
}
}

3.1.9 Summary

[Figure: cameraserver startup flow]

When the system boots, the main_cameraserver program runs first and calls CameraService::instantiate(), which eventually leads to CameraService::onFirstRef(), where the initialization of the whole CameraService begins.

Inside onFirstRef, enumerateProviders is called, and it does two main things:

  • It instantiates a CameraProviderManager object, which manages the resources related to the Camera Provider.
  • It calls CameraProviderManager::initialize() to initialize that object.

During the initialization of CameraProviderManager, three things happen:

  • First, the ICameraProvider proxy is obtained through getService().
  • Then a ProviderInfo object is instantiated and its initialize() method is called.
  • Finally, the ProviderInfo is added to an internal container for management.

ProviderInfo::initialize() in turn performs the following steps:

  • It receives the ICameraProvider proxy obtained by CameraProviderManager and stores it in a member variable.
  • Because ProviderInfo implements the ICameraProviderCallback interface, it then calls ICameraProvider::setCallback() to register itself with the Camera Provider so it can receive event callbacks from the Provider.
  • Next, it calls the ICameraProvider proxy's getCameraDeviceInterface_V3_X() to obtain the Provider-side ICameraDevice proxy and passes it to the DeviceInfo3 constructor; while DeviceInfo3 is being constructed, the device's characteristics are fetched through the ICameraDevice proxy's getCameraCharacteristics() and saved in member variables.
  • Finally, ProviderInfo stores each DeviceInfo3 in an internal container for unified management, completing the initialization.

Through this sequence of steps the Camera Service process is up and running: it holds the Camera Provider proxy and has registered its own callbacks with the Provider, establishing communication with the Provider; on the other side it exposes the AIDL interfaces to the Framework as a service and waits for requests from the Framework.

3.2 Handling Application Requests

Once the user opens a camera application, it calls CameraManager.openCamera, which enters the Framework; after internal processing the Framework sends the request down to the Camera Service. The Camera Service mainly handles fetching the camera characteristics and opening the camera device; with the returned camera device the app then issues the requests that create a session and submit capture requests. Let's briefly walk through how this series of requests is handled inside the Camera Service.
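For reference, the sketch below shows the NDK (camera2 C API) equivalent of CameraManager.openCamera; calling it from an app is what ultimately reaches CameraService::connectDevice. It is a minimal illustration built on the public NDK headers, not framework code, and most error handling is omitted.

// Minimal NDK sketch: open the first camera reported by the camera service.
#include <camera/NdkCameraManager.h>
#include <camera/NdkCameraDevice.h>

static void onDisconnected(void* /*ctx*/, ACameraDevice* /*dev*/) {}
static void onError(void* /*ctx*/, ACameraDevice* /*dev*/, int /*error*/) {}

ACameraDevice* openFirstCamera() {
    ACameraManager* mgr = ACameraManager_create();

    // Enumerate the camera IDs known to the camera service.
    ACameraIdList* idList = nullptr;
    ACameraManager_getCameraIdList(mgr, &idList);
    if (idList == nullptr || idList->numCameras < 1) return nullptr;

    // openCamera ends up in CameraService::connectDevice on the service side.
    ACameraDevice_StateCallbacks callbacks = { nullptr, onDisconnected, onError };
    ACameraDevice* device = nullptr;
    ACameraManager_openCamera(mgr, idList->cameraIds[0], &callbacks, &device);

    ACameraManager_deleteCameraIdList(idList);
    ACameraManager_delete(mgr);
    return device;
}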

3.2.1 Getting Camera Characteristics

Fetching camera characteristics is straightforward: the characteristics of each camera device were already obtained during Camera Service startup and stored in the corresponding DeviceInfo3, so this call simply reads them out of that DeviceInfo3 and returns them.

status_t CameraProviderManager::getCameraCharacteristics(const std::string &id,
CameraMetadata* characteristics) const {
std::lock_guard<std::mutex> lock(mInterfaceMutex);
return getCameraCharacteristicsLocked(id, characteristics);
}

status_t CameraProviderManager::getCameraCharacteristicsLocked(const std::string &id,
CameraMetadata* characteristics) const {
auto deviceInfo = findDeviceInfoLocked(id, /*minVersion*/ {3,0}, /*maxVersion*/ {5,0});
if (deviceInfo != nullptr) {
return deviceInfo->getCameraCharacteristics(characteristics);
}

// Find hidden physical camera characteristics
for (auto& provider : mProviders) {
for (auto& deviceInfo : provider->mDevices) {
status_t res = deviceInfo->getPhysicalCameraCharacteristics(id, characteristics);
if (res != NAME_NOT_FOUND) return res;
}
}

return NAME_NOT_FOUND;
}

CameraProviderManager::ProviderInfo::DeviceInfo* CameraProviderManager::findDeviceInfoLocked(
const std::string& id,
hardware::hidl_version minVersion, hardware::hidl_version maxVersion) const {
for (auto& provider : mProviders) {
for (auto& deviceInfo : provider->mDevices) {
if (deviceInfo->mId == id &&
minVersion <= deviceInfo->mVersion && maxVersion >= deviceInfo->mVersion) {
return deviceInfo.get();
}
}
}
return nullptr;
}

3.2.2 Opening the Camera

Opening a camera device is implemented mainly by connectDevice (the end-to-end flow is covered in Part 1 of this series). Its internal implementation is fairly involved, so let's look at it in detail.

3.2.2.1 CameraService::connectDevice
Status CameraService::connectDevice(
const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
const String16& cameraId,
const String16& clientPackageName,
int clientUid,
/*out*/
sp<hardware::camera2::ICameraDeviceUser>* device) {

ATRACE_CALL();
Status ret = Status::ok();
String8 id = String8(cameraId);
sp<CameraDeviceClient> client = nullptr;
String16 clientPackageNameAdj = clientPackageName;
if (hardware::IPCThreadState::self()->isServingCall()) {
std::string vendorClient =
StringPrintf("vendor.client.pid<%d>", CameraThreadState::getCallingPid());
clientPackageNameAdj = String16(vendorClient.c_str());
}
//Create the CameraDeviceClient
ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id,
/*api1CameraId*/-1,
CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageNameAdj,
clientUid, USE_CALLING_PID, API_2, /*shimUpdateOnly*/ false, /*out*/client);

if(!ret.isOk()) {
logRejected(id, CameraThreadState::getCallingPid(), String8(clientPackageNameAdj),
ret.toString8());
return ret;
}
//Return the client to the framework
*device = client;
return ret;
}
3.2.2.2 CameraService::connectHelper
Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
int api1CameraId, int halVersion, const String16& clientPackageName, int clientUid,
int clientPid, apiLevel effectiveApiLevel, bool shimUpdateOnly,
/*out*/sp<CLIENT>& device) {
binder::Status ret = binder::Status::ok();

String8 clientName8(clientPackageName);

int originalClientPid = 0;

ALOGI("CameraService::connect call (PID %d \"%s\", camera ID %s) for HAL version %s and "
"Camera API version %d", clientPid, clientName8.string(), cameraId.string(),
(halVersion == -1) ? "default" : std::to_string(halVersion).c_str(),
static_cast<int>(effectiveApiLevel));

if (shouldRejectHiddenCameraConnection(cameraId)) {
ALOGW("Attempting to connect to system-only camera id %s, connection rejected",
cameraId.c_str());
return STATUS_ERROR_FMT(ERROR_DISCONNECTED,
"No camera device with ID \"%s\" currently available",
cameraId.string());

}
sp<CLIENT> client = nullptr;
{
// Acquire mServiceLock and prevent other clients from connecting
std::unique_ptr<AutoConditionLock> lock =
AutoConditionLock::waitAndAcquire(mServiceLockWrapper, DEFAULT_CONNECT_TIMEOUT_NS);

if (lock == nullptr) {
ALOGE("CameraService::connect (PID %d) rejected (too many other clients connecting)."
, clientPid);
return STATUS_ERROR_FMT(ERROR_MAX_CAMERAS_IN_USE,
"Cannot open camera %s for \"%s\" (PID %d): Too many other clients connecting",
cameraId.string(), clientName8.string(), clientPid);
}

// Enforce client permissions and do basic sanity checks
if(!(ret = validateConnectLocked(cameraId, clientName8,
/*inout*/clientUid, /*inout*/clientPid, /*out*/originalClientPid)).isOk()) {
return ret;
}

// Check the shim parameters after acquiring lock, if they have already been updated and
// we were doing a shim update, return immediately
if (shimUpdateOnly) {
auto cameraState = getCameraState(cameraId);
if (cameraState != nullptr) {
if (!cameraState->getShimParams().isEmpty()) return ret;
}
}

status_t err;

sp<BasicClient> clientTmp = nullptr;
std::shared_ptr<resource_policy::ClientDescriptor<String8, sp<BasicClient>>> partial;
if ((err = handleEvictionsLocked(cameraId, originalClientPid, effectiveApiLevel,
IInterface::asBinder(cameraCb), clientName8, /*out*/&clientTmp,
/*out*/&partial)) != NO_ERROR) {
switch (err) {
case -ENODEV:
return STATUS_ERROR_FMT(ERROR_DISCONNECTED,
"No camera device with ID \"%s\" currently available",
cameraId.string());
case -EBUSY:
return STATUS_ERROR_FMT(ERROR_CAMERA_IN_USE,
"Higher-priority client using camera, ID \"%s\" currently unavailable",
cameraId.string());
default:
return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
"Unexpected error %s (%d) opening camera \"%s\"",
strerror(-err), err, cameraId.string());
}
}

if (clientTmp.get() != nullptr) {
// Handle special case for API1 MediaRecorder where the existing client is returned
device = static_cast<CLIENT*>(clientTmp.get());
return ret;
}

// give flashlight a chance to close devices if necessary.
mFlashlight->prepareDeviceOpen(cameraId);

int facing = -1;
int deviceVersion = getDeviceVersion(cameraId, /*out*/&facing);
if (facing == -1) {
ALOGE("%s: Unable to get camera device \"%s\" facing", __FUNCTION__, cameraId.string());
return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
"Unable to get camera device \"%s\" facing", cameraId.string());
}

sp<BasicClient> tmp = nullptr;
if(!(ret = makeClient(this, cameraCb, clientPackageName,
cameraId, api1CameraId, facing,
clientPid, clientUid, getpid(),
halVersion, deviceVersion, effectiveApiLevel,
/*out*/&tmp)).isOk()) {
return ret;
}
client = static_cast<CLIENT*>(tmp.get());

LOG_ALWAYS_FATAL_IF(client.get() == nullptr, "%s: CameraService in invalid state",
__FUNCTION__);

err = client->initialize(mCameraProviderManager, mMonitorTags);
if (err != OK) {
ALOGE("%s: Could not initialize client from HAL.", __FUNCTION__);
// Errors could be from the HAL module open call or from AppOpsManager
switch(err) {
case BAD_VALUE:
return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
"Illegal argument to HAL module for camera \"%s\"", cameraId.string());
case -EBUSY:
return STATUS_ERROR_FMT(ERROR_CAMERA_IN_USE,
"Camera \"%s\" is already open", cameraId.string());
case -EUSERS:
return STATUS_ERROR_FMT(ERROR_MAX_CAMERAS_IN_USE,
"Too many cameras already open, cannot open camera \"%s\"",
cameraId.string());
case PERMISSION_DENIED:
return STATUS_ERROR_FMT(ERROR_PERMISSION_DENIED,
"No permission to open camera \"%s\"", cameraId.string());
case -EACCES:
return STATUS_ERROR_FMT(ERROR_DISABLED,
"Camera \"%s\" disabled by policy", cameraId.string());
case -ENODEV:
default:
return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
"Failed to initialize camera \"%s\": %s (%d)", cameraId.string(),
strerror(-err), err);
}
}

// Update shim paremeters for legacy clients
if (effectiveApiLevel == API_1) {
// Assume we have always received a Client subclass for API1
sp<Client> shimClient = reinterpret_cast<Client*>(client.get());
String8 rawParams = shimClient->getParameters();
CameraParameters params(rawParams);

auto cameraState = getCameraState(cameraId);
if (cameraState != nullptr) {
cameraState->setShimParams(params);
} else {
ALOGE("%s: Cannot update shim parameters for camera %s, no such device exists.",
__FUNCTION__, cameraId.string());
}
}

if (shimUpdateOnly) {
// If only updating legacy shim parameters, immediately disconnect client
mServiceLock.unlock();
client->disconnect();
mServiceLock.lock();
} else {
// Otherwise, add client to active clients list
finishConnectLocked(client, partial);
}
} // lock is destroyed, allow further connect calls

// Important: release the mutex here so the client can call back into the service from its
// destructor (can be at the end of the call)
device = client;
return ret;
}
3.2.2.3 CameraService::makeClient
Status CameraService::makeClient(const sp<CameraService>& cameraService,
const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
int api1CameraId, int facing, int clientPid, uid_t clientUid, int servicePid,
int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
/*out*/sp<BasicClient>* client) {

if (halVersion < 0 || halVersion == deviceVersion) {
// Default path: HAL version is unspecified by caller, create CameraClient
// based on device version reported by the HAL.
switch(deviceVersion) {
case CAMERA_DEVICE_API_VERSION_1_0:
if (effectiveApiLevel == API_1) { // Camera1 API route
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new CameraClient(cameraService, tmp, packageName,
api1CameraId, facing, clientPid, clientUid,
getpid());
} else { // Camera2 API route
ALOGW("Camera using old HAL version: %d", deviceVersion);
return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
"Camera device \"%s\" HAL version %d does not support camera2 API",
cameraId.string(), deviceVersion);
}
break;
case CAMERA_DEVICE_API_VERSION_3_0:
case CAMERA_DEVICE_API_VERSION_3_1:
case CAMERA_DEVICE_API_VERSION_3_2:
case CAMERA_DEVICE_API_VERSION_3_3:
case CAMERA_DEVICE_API_VERSION_3_4:
case CAMERA_DEVICE_API_VERSION_3_5:
if (effectiveApiLevel == API_1) { // Camera1 API route
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new Camera2Client(cameraService, tmp, packageName,
cameraId, api1CameraId,
facing, clientPid, clientUid,
servicePid);
} else { // Camera2 API route
sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
*client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
facing, clientPid, clientUid, servicePid);
}
break;
default:
// Should not be reachable
ALOGE("Unknown camera device HAL version: %d", deviceVersion);
return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
"Camera device \"%s\" has unknown HAL version %d",
cameraId.string(), deviceVersion);
}
} else {
// A particular HAL version is requested by caller. Create CameraClient
// based on the requested HAL version.
if (deviceVersion > CAMERA_DEVICE_API_VERSION_1_0 &&
halVersion == CAMERA_DEVICE_API_VERSION_1_0) {
// Only support higher HAL version device opened as HAL1.0 device.
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new CameraClient(cameraService, tmp, packageName,
api1CameraId, facing, clientPid, clientUid,
servicePid);
} else {
// Other combinations (e.g. HAL3.x open as HAL2.x) are not supported yet.
ALOGE("Invalid camera HAL version %x: HAL %x device can only be"
" opened as HAL %x device", halVersion, deviceVersion,
CAMERA_DEVICE_API_VERSION_1_0);
return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
"Camera device \"%s\" (HAL version %d) cannot be opened as HAL version %d",
cameraId.string(), deviceVersion, halVersion);
}
}
return Status::ok();
}
  • new CameraDeviceClient

[->frameworks\av\services\camera\libcameraservice\api2\CameraDeviceClient.cpp]

// Interface used by CameraService

CameraDeviceClient::CameraDeviceClient(const sp<CameraService>& cameraService,
const sp<hardware::camera2::ICameraDeviceCallbacks>& remoteCallback,
const String16& clientPackageName,
const String8& cameraId,
int cameraFacing,
int clientPid,
uid_t clientUid,
int servicePid) :
Camera2ClientBase(cameraService, remoteCallback, clientPackageName,
cameraId, /*API1 camera ID*/ -1,
cameraFacing, clientPid, clientUid, servicePid),
mInputStream(),
mStreamingRequestId(REQUEST_ID_NONE),
mRequestIdCounter(0),
mPrivilegedClient(false) {

char value[PROPERTY_VALUE_MAX];
property_get("persist.vendor.camera.privapp.list", value, "");
String16 packagelist(value);
if (packagelist.contains(clientPackageName.string())) {
mPrivilegedClient = true;
}

ATRACE_CALL();
ALOGI("CameraDeviceClient %s: Opened", cameraId.string());
}

// Interface used by CameraService
template <typename TClientBase>
Camera2ClientBase<TClientBase>::Camera2ClientBase(
const sp<CameraService>& cameraService,
const sp<TCamCallbacks>& remoteCallback,
const String16& clientPackageName,
const String8& cameraId,
int api1CameraId,
int cameraFacing,
int clientPid,
uid_t clientUid,
int servicePid):
TClientBase(cameraService, remoteCallback, clientPackageName,
cameraId, api1CameraId, cameraFacing, clientPid, clientUid, servicePid),
mSharedCameraCallbacks(remoteCallback),
mDeviceVersion(cameraService->getDeviceVersion(TClientBase::mCameraIdStr)),
mDevice(new Camera3Device(cameraId, clientPackageName)), //instantiate Camera3Device
mDeviceActive(false), mApi1CameraId(api1CameraId)
{
ALOGI("Camera %s: Opened. Client: %s (PID %d, UID %d)", cameraId.string(),
String8(clientPackageName).string(), clientPid, clientUid);

mInitialClientPid = clientPid;
LOG_ALWAYS_FATAL_IF(mDevice == 0, "Device should never be NULL here.");
}
  • new Camera3Device

    [->frameworks\av\services\camera\libcameraservice\Camera3Device.cpp]

    Camera3Device::Camera3Device(const String8 &id, const String16& clientPackageName):
    sizePerFace(3),
    faceNumPerGroup(10),
    sizePerFaceGroup(sizePerFace * faceNumPerGroup),
    instaTagBase(0),
    instaTagSection("org.qti.camera.intro"),
    instaFirstTagName("instaInputMetadata"),
    inputMetadata(9),
    mId(id),
    mOperatingMode(NO_MODE),
    mIsConstrainedHighSpeedConfiguration(false),
    mStatus(STATUS_UNINITIALIZED),
    mStatusWaiters(0),
    mUsePartialResult(false),
    mNumPartialResults(1),
    mTimestampOffset(0),
    mNextResultFrameNumber(0),
    mNextReprocessResultFrameNumber(0),
    mNextZslStillResultFrameNumber(0),
    mNextShutterFrameNumber(0),
    mNextReprocessShutterFrameNumber(0),
    mNextZslStillShutterFrameNumber(0),
    mListener(NULL),
    mVendorTagId(CAMERA_METADATA_INVALID_VENDOR_ID),
    mLastTemplateId(-1),
    mNeedFixupMonochromeTags(false)
    {
    ATRACE_CALL();
    packge_name = clientPackageName;
    if (!strcmp(String8(packge_name).string(), "com.alibaba.dingtalk.focus")) {
    property_set("vendor.select.mulicamera", "1");
    }
    if (!strcmp(String8(packge_name).string(), "com.tencent.wemeet.rooms")) {
    property_set("vendor.select.mulicamera", "1");
    }
    if (!strcmp(String8(packge_name).string(), "com.ss.meetx.room")) {
    property_set("vendor.select.mulicamera", "1");
    }
    isLogicCam = mId == "3";
    char value[PROPERTY_VALUE_MAX];
    property_get("vendor.select.mulicamera", value, "0");
    bool enableMultiCam = atoi(value) == 1;
    if (enableMultiCam && !isLogicCam) {
    isLogicCam = (mId == "0") || (mId == "1");
    }
    ALOGD("%s: %s created device for camera %s", __FUNCTION__, String8(clientPackageName).c_str(), mId.string());
    }
3.2.2.4 CameraDeviceClient::initialize
status_t CameraDeviceClient::initialize(sp<CameraProviderManager> manager,
const String8& monitorTags) {
return initializeImpl(manager, monitorTags);
}

template<typename TProviderPtr>
status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr, const String8& monitorTags) {
ATRACE_CALL();
status_t res;

res = Camera2ClientBase::initialize(providerPtr, monitorTags);
if (res != OK) {
return res;
}

String8 threadName;
mFrameProcessor = new FrameProcessorBase(mDevice);
threadName = String8::format("CDU-%s-FrameProc", mCameraIdStr.string());
mFrameProcessor->run(threadName.string());

mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
FRAME_PROCESSOR_LISTENER_MAX_ID,
/*listener*/this,
/*sendPartials*/true);

auto deviceInfo = mDevice->info();
camera_metadata_entry_t physicalKeysEntry = deviceInfo.find(
ANDROID_REQUEST_AVAILABLE_PHYSICAL_CAMERA_REQUEST_KEYS);
if (physicalKeysEntry.count > 0) {
mSupportedPhysicalRequestKeys.insert(mSupportedPhysicalRequestKeys.begin(),
physicalKeysEntry.data.i32,
physicalKeysEntry.data.i32 + physicalKeysEntry.count);
}

mProviderManager = providerPtr;
return OK;
}
  • Camera2ClientBase::initialize
template <typename TClientBase>
status_t Camera2ClientBase<TClientBase>::initialize(sp<CameraProviderManager> manager,
const String8& monitorTags) {
return initializeImpl(manager, monitorTags);
}

template <typename TClientBase>
template <typename TProviderPtr>
status_t Camera2ClientBase<TClientBase>::initializeImpl(TProviderPtr providerPtr,
const String8& monitorTags) {
ATRACE_CALL();
ALOGV("%s: Initializing client for camera %s", __FUNCTION__,
TClientBase::mCameraIdStr.string());
status_t res;

// Verify ops permissions
res = TClientBase::startCameraOps();
if (res != OK) {
return res;
}

if (mDevice == NULL) {
ALOGE("%s: Camera %s: No device connected",
__FUNCTION__, TClientBase::mCameraIdStr.string());
return NO_INIT;
}

res = mDevice->initialize(providerPtr, monitorTags);
if (res != OK) {
ALOGE("%s: Camera %s: unable to initialize device: %s (%d)",
__FUNCTION__, TClientBase::mCameraIdStr.string(), strerror(-res), res);
return res;
}

wp<CameraDeviceBase::NotificationListener> weakThis(this);
res = mDevice->setNotifyCallback(weakThis);

return OK;
}
  • new FrameProcessorBase

    [->frameworks\av\services\camera\libcameraservice\common\FrameProcessorBase.cpp]

    FrameProcessorBase::FrameProcessorBase(wp<CameraDeviceBase> device) :
    Thread(/*canCallJava*/false),
    mDevice(device),
    mNumPartialResults(1) {
    sp<CameraDeviceBase> cameraDevice = device.promote();
    if (cameraDevice != 0) {
    CameraMetadata staticInfo = cameraDevice->info();
    camera_metadata_entry_t entry = staticInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
    if (entry.count > 0) {
    mNumPartialResults = entry.data.i32[0];
    }
    }
    }
  • Camera3Device::initialize

    status_t Camera3Device::initialize(sp<CameraProviderManager> manager, const String8& monitorTags) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    ALOGD("%s: Initializing HIDL device for camera %s", __FUNCTION__, mId.string());
    if (mStatus != STATUS_UNINITIALIZED) {
    CLOGE("Already initialized!");
    return INVALID_OPERATION;
    }
    if (manager == nullptr) return INVALID_OPERATION;

    sp<ICameraDeviceSession> session;
    ATRACE_BEGIN("CameraHal::openSession");
    //获取ICameraDeviceSession代理
    status_t res = manager->openSession(mId.string(), this,
    /*out*/ &session);
    ATRACE_END();
    if (res != OK) {
    SET_ERR_L("Could not open camera session: %s (%d)", strerror(-res), res);
    return res;
    }

    res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo);
    if (res != OK) {
    SET_ERR_L("Could not retrieve camera characteristics: %s (%d)", strerror(-res), res);
    session->close();
    return res;
    }

    std::vector<std::string> physicalCameraIds;
    bool isLogical = manager->isLogicalCamera(mId.string(), &physicalCameraIds);
    if (isLogical) {
    for (auto& physicalId : physicalCameraIds) {
    res = manager->getCameraCharacteristics(
    physicalId, &mPhysicalDeviceInfoMap[physicalId]);
    if (res != OK) {
    SET_ERR_L("Could not retrieve camera %s characteristics: %s (%d)",
    physicalId.c_str(), strerror(-res), res);
    session->close();
    return res;
    }

    if (DistortionMapper::isDistortionSupported(mPhysicalDeviceInfoMap[physicalId])) {
    mDistortionMappers[physicalId].setupStaticInfo(mPhysicalDeviceInfoMap[physicalId]);
    if (res != OK) {
    SET_ERR_L("Unable to read camera %s's calibration fields for distortion "
    "correction", physicalId.c_str());
    session->close();
    return res;
    }
    }
    }
    }

    std::shared_ptr<RequestMetadataQueue> queue;
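    // 获取 request metadata 的 FMQ(Fast Message Queue),后续下发请求时优先经共享内存队列传递 metadata,避免每次都走 hwbinder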
    auto requestQueueRet = session->getCaptureRequestMetadataQueue(
    [&queue](const auto& descriptor) {
    queue = std::make_shared<RequestMetadataQueue>(descriptor);
    if (!queue->isValid() || queue->availableToWrite() <= 0) {
    ALOGE("HAL returns empty request metadata fmq, not use it");
    queue = nullptr;
    // don't use the queue onwards.
    }
    });
    if (!requestQueueRet.isOk()) {
    ALOGE("Transaction error when getting request metadata fmq: %s, not use it",
    requestQueueRet.description().c_str());
    return DEAD_OBJECT;
    }

    std::unique_ptr<ResultMetadataQueue>& resQueue = mResultMetadataQueue;
    auto resultQueueRet = session->getCaptureResultMetadataQueue(
    [&resQueue](const auto& descriptor) {
    resQueue = std::make_unique<ResultMetadataQueue>(descriptor);
    if (!resQueue->isValid() || resQueue->availableToWrite() <= 0) {
    ALOGE("HAL returns empty result metadata fmq, not use it");
    resQueue = nullptr;
    // Don't use the resQueue onwards.
    }
    });
    if (!resultQueueRet.isOk()) {
    ALOGE("Transaction error when getting result metadata queue from camera session: %s",
    resultQueueRet.description().c_str());
    return DEAD_OBJECT;
    }
    IF_ALOGV() {
    session->interfaceChain([](
    ::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
    ALOGV("Session interface chain:");
    for (const auto& iface : interfaceChain) {
    ALOGV(" %s", iface.c_str());
    }
    });
    }

    camera_metadata_entry bufMgrMode =
    mDeviceInfo.find(ANDROID_INFO_SUPPORTED_BUFFER_MANAGEMENT_VERSION);
    if (bufMgrMode.count > 0) {
    mUseHalBufManager = (bufMgrMode.data.u8[0] ==
    ANDROID_INFO_SUPPORTED_BUFFER_MANAGEMENT_VERSION_HIDL_DEVICE_3_5);
    }

    mInterface = new HalInterface(session, queue, mUseHalBufManager);
    std::string providerType;
    mVendorTagId = manager->getProviderTagIdLocked(mId.string());
    mTagMonitor.initialize(mVendorTagId);
    if (!monitorTags.isEmpty()) {
    mTagMonitor.parseTagsToMonitor(String8(monitorTags));
    }

    // Metadata tags needs fixup for monochrome camera device version less
    // than 3.5.
    hardware::hidl_version maxVersion{0,0};
    res = manager->getHighestSupportedVersion(mId.string(), &maxVersion);
    if (res != OK) {
    ALOGE("%s: Error in getting camera device version id: %s (%d)",
    __FUNCTION__, strerror(-res), res);
    return res;
    }
    int deviceVersion = HARDWARE_DEVICE_API_VERSION(
    maxVersion.get_major(), maxVersion.get_minor());

    bool isMonochrome = false;
    camera_metadata_entry_t entry = mDeviceInfo.find(ANDROID_REQUEST_AVAILABLE_CAPABILITIES);
    for (size_t i = 0; i < entry.count; i++) {
    uint8_t capability = entry.data.u8[i];
    if (capability == ANDROID_REQUEST_AVAILABLE_CAPABILITIES_MONOCHROME) {
    isMonochrome = true;
    }
    }
    mNeedFixupMonochromeTags = (isMonochrome && deviceVersion < CAMERA_DEVICE_API_VERSION_3_5);

    return initializeCommonLocked();
    }

    status_t Camera3Device::initializeCommonLocked() {

    /** Start up status tracker thread */
    mStatusTracker = new StatusTracker(this);
    status_t res = mStatusTracker->run(String8::format("C3Dev-%s-Status", mId.string()).string());
    if (res != OK) {
    SET_ERR_L("Unable to start status tracking thread: %s (%d)",
    strerror(-res), res);
    mInterface->close();
    mStatusTracker.clear();
    return res;
    }

    /** Register in-flight map to the status tracker */
    mInFlightStatusId = mStatusTracker->addComponent();

    if (mUseHalBufManager) {
    res = mRequestBufferSM.initialize(mStatusTracker);
    if (res != OK) {
    SET_ERR_L("Unable to start request buffer state machine: %s (%d)",
    strerror(-res), res);
    mInterface->close();
    mStatusTracker.clear();
    return res;
    }
    }

    /** Create buffer manager */
    mBufferManager = new Camera3BufferManager();

    Vector<int32_t> sessionParamKeys;
    camera_metadata_entry_t sessionKeysEntry = mDeviceInfo.find(
    ANDROID_REQUEST_AVAILABLE_SESSION_KEYS);
    if (sessionKeysEntry.count > 0) {
    sessionParamKeys.insertArrayAt(sessionKeysEntry.data.i32, 0, sessionKeysEntry.count);
    }

    /** Start up request queue thread */
    mRequestThread = new RequestThread(
    this, mStatusTracker, mInterface, sessionParamKeys, mUseHalBufManager);
    res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
    if (res != OK) {
    SET_ERR_L("Unable to start request queue thread: %s (%d)",
    strerror(-res), res);
    mInterface->close();
    mRequestThread.clear();
    return res;
    }

    mPreparerThread = new PreparerThread();

    internalUpdateStatusLocked(STATUS_UNCONFIGURED);
    mNextStreamId = 0;
    mDummyStreamId = NO_STREAM;
    mNeedConfig = true;
    mPauseStateNotify = false;

    // Measure the clock domain offset between camera and video/hw_composer
    camera_metadata_entry timestampSource =
    mDeviceInfo.find(ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE);
    if (timestampSource.count > 0 && timestampSource.data.u8[0] ==
    ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
    mTimestampOffset = getMonoToBoottimeOffset();
    }

    // Will the HAL be sending in early partial result metadata?
    camera_metadata_entry partialResultsCount =
    mDeviceInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
    if (partialResultsCount.count > 0) {
    mNumPartialResults = partialResultsCount.data.i32[0];
    mUsePartialResult = (mNumPartialResults > 1);
    }

    camera_metadata_entry configs =
    mDeviceInfo.find(ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS);
    for (uint32_t i = 0; i < configs.count; i += 4) {
    if (configs.data.i32[i] == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED &&
    configs.data.i32[i + 3] ==
    ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_INPUT) {
    mSupportedOpaqueInputSizes.add(Size(configs.data.i32[i + 1],
    configs.data.i32[i + 2]));
    }
    }

    if (DistortionMapper::isDistortionSupported(mDeviceInfo)) {
    res = mDistortionMappers[mId.c_str()].setupStaticInfo(mDeviceInfo);
    if (res != OK) {
    SET_ERR_L("Unable to read necessary calibration fields for distortion correction");
    return res;
    }
    }

    notifyCameraState(true);
    return OK;
    }
  • CameraProviderManager::openSession

    status_t CameraProviderManager::openSession(const std::string &id,
    const sp<device::V3_2::ICameraDeviceCallback>& callback,
    /*out*/
    sp<device::V3_2::ICameraDeviceSession> *session) {

    std::lock_guard<std::mutex> lock(mInterfaceMutex);

    auto deviceInfo = findDeviceInfoLocked(id,
    /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;

    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);
    const sp<provider::V2_4::ICameraProvider> provider =
    deviceInfo->mParentProvider->startProviderInterface();
    if (provider == nullptr) {
    return DEAD_OBJECT;
    }
    saveRef(DeviceMode::CAMERA, id, provider);

    Status status;
    hardware::Return<void> ret;
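    // 先通过 DeviceInfo3 拿到 ICameraDevice 的 HIDL 代理,随后调用其 open 方法打开设备,并在回调中获得 ICameraDeviceSession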
    auto interface = deviceInfo3->startDeviceInterface<
    CameraProviderManager::ProviderInfo::DeviceInfo3::InterfaceT>();
    if (interface == nullptr) {
    return DEAD_OBJECT;
    }

    ret = interface->open(callback, [&status, &session]
    (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
    status = s;
    if (status == Status::OK) {
    *session = cameraSession;
    }
    });
    if (!ret.isOk()) {
    removeRef(DeviceMode::CAMERA, id);
    ALOGE("%s: Transaction error opening a session for camera device %s: %s",
    __FUNCTION__, id.c_str(), ret.description().c_str());
    return DEAD_OBJECT;
    }
    return mapToStatusT(status);
    }
3.2.2.5 小结

对于打开相机设备的动作,主要由connectDevice接口来实现。

当Camera Framework调用ICameraService的connectDevice接口时,Camera Service主要做了两件事情:

  • 一个是创建CameraDeviceClient。
  • 一个是对CameraDeviceClient进行初始化,并将其返回给Framework。

而其中创建CameraDeviceClient的工作是通过makeClient方法来实现的,在该方法中首先实例化一个CameraDeviceClient,并将来自Framework的ICameraDeviceCallbacks实现类CameraDeviceImpl.CameraDeviceCallbacks存入CameraDeviceClient中,这样一旦有结果产生,便可以通过这个回调将结果回传给Framework;其次还实例化了一个Camera3Device对象。

其中的CameraDeviceClient的初始化工作是通过调用其initialize方法来完成的,在该方法中:

  • 首先调用父类Camera2ClientBase的initialize方法进行初始化。
  • 其次实例化FrameProcessorBase对象并且将内部的Camera3Device对象传入其中,这样就建立了FrameProcessorBase和Camera3Device的联系,之后将内部线程运行起来,等待来自Camera3Device的结果。
  • 最后将CameraDeviceClient注册到FrameProcessorBase内部,这样就建立了与CameraDeviceClient的联系。

而在Camera2ClientBase的initialize方法中会调用Camera3Device的initialize方法对其进行初始化工作,并且通过调用Camera3Device的setNotifyCallback方法将自身注册到Camera3Device内部,这样一旦Camera3Device有结果产生,就可以发送到CameraDeviceClient中。

而在Camera3Device的初始化过程中,首先通过调用CameraProviderManager的openSession方法打开并获取一个Provider中的ICameraDeviceSession代理,其次实例化一个HalInterface对象,将之前获取的ICameraDeviceSession代理存入其中,最后将RequestThread线程运行起来,等待Request的下发。

而对于CameraProviderManager的openSession方法,它会通过内部的DeviceInfo获取ICameraDevice代理,调用其open方法从Camera Provider中打开并获取一个ICameraDeviceSession远程代理;并且由于Camera3Device实现了Provider定义的ICameraDeviceCallback接口,其自身也会通过该open方法传入到Provider中,用于接收来自Provider的结果回传。

至此,整个connectDevice方法已经运行完毕,此时App已经获取了一个Camera设备。紧接着,由于需要采集图像,App会再调用CameraDevice的createCaptureSession操作,到达Framework后,再通过ICameraDeviceUser代理进行一系列操作,依次包含cancelRequest/beginConfigure/deleteStream/createStream以及endConfigure方法,来完成数据流的配置。
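
下面给出一段自包含的C++示意代码(其中的FakeCameraDeviceUser为虚构的精简接口,方法签名并非真实的ICameraDeviceUser AIDL定义),仅用来说明上述配置数据流时各接口的调用先后顺序:

#include <cstdint>
#include <cstdio>

// FakeCameraDeviceUser 为虚构的精简接口,仅示意调用顺序,并非真实的 ICameraDeviceUser 定义
struct FakeCameraDeviceUser {
    void cancelRequest(int requestId, int64_t* lastFrameNumber) {
        std::printf("cancelRequest(%d)\n", requestId);
        if (lastFrameNumber != nullptr) *lastFrameNumber = 0;
    }
    void beginConfigure() { std::printf("beginConfigure()\n"); }            // Service 端目前为空实现
    void deleteStream(int streamId) { std::printf("deleteStream(%d)\n", streamId); }
    int createStream(int width, int height, int format) {                   // 返回新创建的 streamId
        std::printf("createStream(%dx%d, fmt=0x%x)\n", width, height, format);
        return 0;
    }
    void endConfigure(int operatingMode) {                                   // 内部触发 configureStreams
        std::printf("endConfigure(mode=%d)\n", operatingMode);
    }
};

int main() {
    FakeCameraDeviceUser device;
    int64_t lastFrame = -1;
    device.cancelRequest(/*requestId*/ 1, &lastFrame);    // 1. 先停掉正在重复下发的请求
    device.beginConfigure();                               // 2. 空实现,仅保留接口
    device.deleteStream(/*streamId*/ 0);                   // 3. 删除旧的数据流
    int streamId = device.createStream(1920, 1080, 0x22); // 4. 创建新的输出流
    device.endConfigure(/*operatingMode*/ 0);              // 5. 最终走到 Camera3Device::configureStreams
    std::printf("new streamId = %d\n", streamId);
    return 0;
}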

3.2.3 配置数据流

3.2.3.1 cancelRequest

[->frameworks\av\services\camera\libcameraservice\CameraDeviceClient.cpp]

binder::Status CameraDeviceClient::cancelRequest(
int requestId,
/*out*/
int64_t* lastFrameNumber) {
ATRACE_CALL();
ALOGV("%s, requestId = %d", __FUNCTION__, requestId);

status_t err;
binder::Status res;

if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

Mutex::Autolock icl(mBinderSerializationLock);

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

Mutex::Autolock idLock(mStreamingRequestIdLock);
if (mStreamingRequestId != requestId) {
String8 msg = String8::format("Camera %s: Canceling request ID %d doesn't match "
"current request ID %d", mCameraIdStr.string(), requestId, mStreamingRequestId);
ALOGE("%s: %s", __FUNCTION__, msg.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, msg.string());
}

err = mDevice->clearStreamingRequest(lastFrameNumber);

if (err == OK) {
ALOGV("%s: Camera %s: Successfully cleared streaming request",
__FUNCTION__, mCameraIdStr.string());
mStreamingRequestId = REQUEST_ID_NONE;
} else {
res = STATUS_ERROR_FMT(CameraService::ERROR_INVALID_OPERATION,
"Camera %s: Error clearing streaming request: %s (%d)",
mCameraIdStr.string(), strerror(-err), err);
}

return res;
}
3.2.3.2 beginConfigure
binder::Status CameraDeviceClient::beginConfigure() {
// TODO: Implement this.
ATRACE_CALL();
ALOGV("%s: Not implemented yet.", __FUNCTION__);
return binder::Status::ok();
}
3.2.3.3 deleteStream
binder::Status CameraDeviceClient::deleteStream(int streamId) {
ATRACE_CALL();
ALOGV("%s (streamId = 0x%x)", __FUNCTION__, streamId);

binder::Status res;
if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

Mutex::Autolock icl(mBinderSerializationLock);

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

bool isInput = false;
std::vector<sp<IBinder>> surfaces;
ssize_t dIndex = NAME_NOT_FOUND;
ssize_t compositeIndex = NAME_NOT_FOUND;

if (mInputStream.configured && mInputStream.id == streamId) {
isInput = true;
} else {
// Guard against trying to delete non-created streams
for (size_t i = 0; i < mStreamMap.size(); ++i) {
if (streamId == mStreamMap.valueAt(i).streamId()) {
surfaces.push_back(mStreamMap.keyAt(i));
}
}

// See if this stream is one of the deferred streams.
for (size_t i = 0; i < mDeferredStreams.size(); ++i) {
if (streamId == mDeferredStreams[i]) {
dIndex = i;
break;
}
}

for (size_t i = 0; i < mCompositeStreamMap.size(); ++i) {
if (streamId == mCompositeStreamMap.valueAt(i)->getStreamId()) {
compositeIndex = i;
break;
}
}

if (surfaces.empty() && dIndex == NAME_NOT_FOUND) {
String8 msg = String8::format("Camera %s: Invalid stream ID (%d) specified, no such"
" stream created yet", mCameraIdStr.string(), streamId);
ALOGW("%s: %s", __FUNCTION__, msg.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, msg.string());
}
}

// Also returns BAD_VALUE if stream ID was not valid
status_t err = mDevice->deleteStream(streamId);

if (err != OK) {
String8 msg = String8::format("Camera %s: Unexpected error %s (%d) when deleting stream %d",
mCameraIdStr.string(), strerror(-err), err, streamId);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
} else {
if (isInput) {
mInputStream.configured = false;
} else {
for (auto& surface : surfaces) {
mStreamMap.removeItem(surface);
}

mConfiguredOutputs.removeItem(streamId);

if (dIndex != NAME_NOT_FOUND) {
mDeferredStreams.removeItemsAt(dIndex);
}

if (compositeIndex != NAME_NOT_FOUND) {
status_t ret;
if ((ret = mCompositeStreamMap.valueAt(compositeIndex)->deleteStream())
!= OK) {
String8 msg = String8::format("Camera %s: Unexpected error %s (%d) when "
"deleting composite stream %d", mCameraIdStr.string(), strerror(-err), err,
streamId);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
}
mCompositeStreamMap.removeItemsAt(compositeIndex);
}
}
}

return res;
}
3.2.3.4 createStream
binder::Status CameraDeviceClient::createStream(
const hardware::camera2::params::OutputConfiguration &outputConfiguration,
/*out*/
int32_t* newStreamId) {
ATRACE_CALL();

binder::Status res;
if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

Mutex::Autolock icl(mBinderSerializationLock);

const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
outputConfiguration.getGraphicBufferProducers();
size_t numBufferProducers = bufferProducers.size();
bool deferredConsumer = outputConfiguration.isDeferred();
bool isShared = outputConfiguration.isShared();
String8 physicalCameraId = String8(outputConfiguration.getPhysicalCameraId());
bool deferredConsumerOnly = deferredConsumer && numBufferProducers == 0;

res = checkSurfaceTypeLocked(numBufferProducers, deferredConsumer,
outputConfiguration.getSurfaceType());
if (!res.isOk()) {
return res;
}

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

res = checkPhysicalCameraIdLocked(physicalCameraId);
if (!res.isOk()) {
return res;
}

std::vector<sp<Surface>> surfaces;
std::vector<sp<IBinder>> binders;
status_t err;

// Create stream for deferred surface case.
if (deferredConsumerOnly) {
return createDeferredSurfaceStreamLocked(outputConfiguration, isShared, newStreamId);
}

OutputStreamInfo streamInfo;
bool isStreamInfoValid = false;
for (auto& bufferProducer : bufferProducers) {
// Don't create multiple streams for the same target surface
sp<IBinder> binder = IInterface::asBinder(bufferProducer);
ssize_t index = mStreamMap.indexOfKey(binder);
if (index != NAME_NOT_FOUND) {
String8 msg = String8::format("Camera %s: Surface already has a stream created for it "
"(ID %zd)", mCameraIdStr.string(), index);
ALOGW("%s: %s", __FUNCTION__, msg.string());
return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
}

sp<Surface> surface;
res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer,
physicalCameraId);

if (!res.isOk())
return res;

if (!isStreamInfoValid) {
isStreamInfoValid = true;
}

binders.push_back(IInterface::asBinder(bufferProducer));
surfaces.push_back(surface);
}

int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
std::vector<int> surfaceIds;
bool isDepthCompositeStream = camera3::DepthCompositeStream::isDepthCompositeStream(surfaces[0]);
bool isHeicCompisiteStream = camera3::HeicCompositeStream::isHeicCompositeStream(surfaces[0]);
if (isDepthCompositeStream || isHeicCompisiteStream) {
sp<CompositeStream> compositeStream;
if (isDepthCompositeStream) {
compositeStream = new camera3::DepthCompositeStream(mDevice, getRemoteCallback());
} else {
compositeStream = new camera3::HeicCompositeStream(mDevice, getRemoteCallback());
}

err = compositeStream->createStream(surfaces, deferredConsumer, streamInfo.width,
streamInfo.height, streamInfo.format,
static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
&streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
isShared);
if (err == OK) {
mCompositeStreamMap.add(IInterface::asBinder(surfaces[0]->getIGraphicBufferProducer()),
compositeStream);
}
} else {
err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
streamInfo.height, streamInfo.format, streamInfo.dataSpace,
static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
&streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
isShared);
}

if (err != OK) {
res = STATUS_ERROR_FMT(CameraService::ERROR_INVALID_OPERATION,
"Camera %s: Error creating output stream (%d x %d, fmt %x, dataSpace %x): %s (%d)",
mCameraIdStr.string(), streamInfo.width, streamInfo.height, streamInfo.format,
streamInfo.dataSpace, strerror(-err), err);
} else {
int i = 0;
for (auto& binder : binders) {
ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
__FUNCTION__, binder.get(), streamId, i);
mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
i++;
}

mConfiguredOutputs.add(streamId, outputConfiguration);
mStreamInfoMap[streamId] = streamInfo;

ALOGV("%s: Camera %s: Successfully created a new stream ID %d for output surface"
" (%d x %d) with format 0x%x.",
__FUNCTION__, mCameraIdStr.string(), streamId, streamInfo.width,
streamInfo.height, streamInfo.format);

// Set transform flags to ensure preview to be rotated correctly.
res = setStreamTransformLocked(streamId);

*newStreamId = streamId;
}

return res;
}
3.2.3.5 endConfigure
binder::Status CameraDeviceClient::endConfigure(int operatingMode,
const hardware::camera2::impl::CameraMetadataNative& sessionParams) {
ATRACE_CALL();
ALOGV("%s: ending configure (%d input stream, %zu output surfaces)",
__FUNCTION__, mInputStream.configured ? 1 : 0,
mStreamMap.size());

binder::Status res;
if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

Mutex::Autolock icl(mBinderSerializationLock);

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

res = checkOperatingModeLocked(operatingMode);
if (!res.isOk()) {
return res;
}

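// 将会话参数与工作模式交给 Camera3Device,由其组装流配置并经 ICameraDeviceSession 下发到 Provider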
status_t err = mDevice->configureStreams(sessionParams, operatingMode);
if (err == BAD_VALUE) {
String8 msg = String8::format("Camera %s: Unsupported set of inputs/outputs provided",
mCameraIdStr.string());
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, msg.string());
} else if (err != OK) {
String8 msg = String8::format("Camera %s: Error configuring streams: %s (%d)",
mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
} else {
for (size_t i = 0; i < mCompositeStreamMap.size(); ++i) {
err = mCompositeStreamMap.valueAt(i)->configureStream();
if (err != OK ) {
String8 msg = String8::format("Camera %s: Error configuring composite "
"streams: %s (%d)", mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
break;
}
}
}

return res;
}
3.2.3.6 小结

cancelRequest逻辑比较简单,对应的是CameraDeviceClient的cancelRequest方法,在该方法中会调用Camera3Device的clearStreamingRequest方法,清除RequestThread中正在重复下发的Request,停止该Request的继续下发。

beginConfigure方法是空实现。

deleteStream/createStream 分别是用于删除之前的数据流以及为新的操作创建数据流。

紧接着调用位于整个调用流程末尾的endConfigure方法,该方法对应着CameraDeviceClient的endConfigure方法,其逻辑比较简单:在该方法中会调用Camera3Device的configureStreams方法,而该方法又会通过ICameraDeviceSession的configureStreams_3_4方法最终将需求传递给Provider。

到这里整个数据流已经配置完成,并且App也获取了Framework中的CameraCaptureSession对象,之后便可进行图像需求的下发了。在下发之前需要先创建一个Request:App通过调用CameraDeviceImpl中的createCaptureRequest方法来实现,该方法在Framework中实现,内部会再去调用Camera Service中的AIDL接口createDefaultRequest,该接口由CameraDeviceClient实现,其内部又会去调用Camera3Device的createDefaultRequest方法,最后通过ICameraDeviceSession代理的constructDefaultRequestSettings方法将需求下发到Provider端,创建一个默认的Request配置。一旦操作完成,Provider会将该配置回传至Service,进而给到App中。

3.2.4 处理图像需求

3.2.4.1 CameraDeviceClient::createDefaultRequest
// Create a request object from a template.
binder::Status CameraDeviceClient::createDefaultRequest(int templateId,
/*out*/
hardware::camera2::impl::CameraMetadataNative* request)
{
ATRACE_CALL();
ALOGV("%s (templateId = 0x%x)", __FUNCTION__, templateId);

binder::Status res;
if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

Mutex::Autolock icl(mBinderSerializationLock);

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

CameraMetadata metadata;
status_t err;
if ( (err = mDevice->createDefaultRequest(templateId, &metadata) ) == OK &&
request != NULL) {

request->swap(metadata);
} else if (err == BAD_VALUE) {
res = STATUS_ERROR_FMT(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Camera %s: Template ID %d is invalid or not supported: %s (%d)",
mCameraIdStr.string(), templateId, strerror(-err), err);

} else {
res = STATUS_ERROR_FMT(CameraService::ERROR_INVALID_OPERATION,
"Camera %s: Error creating default request for template %d: %s (%d)",
mCameraIdStr.string(), templateId, strerror(-err), err);
}
return res;
}
3.2.4.2 Camera3Device::createDefaultRequest
status_t Camera3Device::createDefaultRequest(int templateId,
CameraMetadata *request) {
ATRACE_CALL();
ALOGV("%s: for template %d", __FUNCTION__, templateId);

if (templateId <= 0 || templateId >= CAMERA3_TEMPLATE_COUNT) {
android_errorWriteWithInfoLog(CameraService::SN_EVENT_LOG_ID, "26866110",
CameraThreadState::getCallingUid(), nullptr, 0);
return BAD_VALUE;
}

Mutex::Autolock il(mInterfaceLock);

{
Mutex::Autolock l(mLock);
switch (mStatus) {
case STATUS_ERROR:
CLOGE("Device has encountered a serious error");
return INVALID_OPERATION;
case STATUS_UNINITIALIZED:
CLOGE("Device is not initialized!");
return INVALID_OPERATION;
case STATUS_UNCONFIGURED:
case STATUS_CONFIGURED:
case STATUS_ACTIVE:
// OK
break;
default:
SET_ERR_L("Unexpected status: %d", mStatus);
return INVALID_OPERATION;
}

if (!mRequestTemplateCache[templateId].isEmpty()) {
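// 命中模板缓存则直接返回,避免重复向 HAL 请求默认配置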
*request = mRequestTemplateCache[templateId];
mLastTemplateId = templateId;
return OK;
}
}

camera_metadata_t *rawRequest;
status_t res = mInterface->constructDefaultRequestSettings(
(camera3_request_template_t) templateId, &rawRequest);

{
Mutex::Autolock l(mLock);
if (res == BAD_VALUE) {
ALOGI("%s: template %d is not supported on this camera device",
__FUNCTION__, templateId);
return res;
} else if (res != OK) {
CLOGE("Unable to construct request template %d: %s (%d)",
templateId, strerror(-res), res);
return res;
}

set_camera_metadata_vendor_id(rawRequest, mVendorTagId);
mRequestTemplateCache[templateId].acquire(rawRequest);

*request = mRequestTemplateCache[templateId];
mLastTemplateId = templateId;
}
return OK;
}
3.2.4.3 submitRequestList
binder::Status CameraDeviceClient::submitRequestList(
const std::vector<hardware::camera2::CaptureRequest>& requests,
bool streaming,
/*out*/
hardware::camera2::utils::SubmitInfo *submitInfo) {
ATRACE_CALL();
ALOGV("%s-start of function. Request list size %zu", __FUNCTION__, requests.size());

binder::Status res = binder::Status::ok();
status_t err;
if ( !(res = checkPidStatus(__FUNCTION__) ).isOk()) {
return res;
}

Mutex::Autolock icl(mBinderSerializationLock);

if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}

if (requests.empty()) {
ALOGE("%s: Camera %s: Sent null request. Rejecting request.",
__FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, "Empty request list");
}

List<const CameraDeviceBase::PhysicalCameraSettingsList> metadataRequestList;
std::list<const SurfaceMap> surfaceMapList;
submitInfo->mRequestId = mRequestIdCounter;
uint32_t loopCounter = 0;

for (auto&& request: requests) {
if (request.mIsReprocess) {
if (!mInputStream.configured) {
ALOGE("%s: Camera %s: no input stream is configured.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR_FMT(CameraService::ERROR_ILLEGAL_ARGUMENT,
"No input configured for camera %s but request is for reprocessing",
mCameraIdStr.string());
} else if (streaming) {
ALOGE("%s: Camera %s: streaming reprocess requests not supported.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Repeating reprocess requests not supported");
} else if (request.mPhysicalCameraSettings.size() > 1) {
ALOGE("%s: Camera %s: reprocess requests not supported for "
"multiple physical cameras.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Reprocess requests not supported for multiple cameras");
}
}

if (request.mPhysicalCameraSettings.empty()) {
ALOGE("%s: Camera %s: request doesn't contain any settings.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request doesn't contain any settings");
}

//The first capture settings should always match the logical camera id
String8 logicalId(request.mPhysicalCameraSettings.begin()->id.c_str());
if (mDevice->getId() != logicalId) {
ALOGE("%s: Camera %s: Invalid camera request settings.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Invalid camera request settings");
}

if (request.mSurfaceList.isEmpty() && request.mStreamIdxList.size() == 0) {
ALOGE("%s: Camera %s: Requests must have at least one surface target. "
"Rejecting request.", __FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request has no output targets");
}

/**
* Write in the output stream IDs and map from stream ID to surface ID
* which we calculate from the capture request's list of surface target
*/
SurfaceMap surfaceMap;
Vector<int32_t> outputStreamIds;
std::vector<std::string> requestedPhysicalIds;
if (request.mSurfaceList.size() > 0) {
for (const sp<Surface>& surface : request.mSurfaceList) {
if (surface == 0) continue;

int32_t streamId;
sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
res = insertGbpLocked(gbp, &surfaceMap, &outputStreamIds, &streamId);
if (!res.isOk()) {
return res;
}

ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
if (index >= 0) {
String8 requestedPhysicalId(
mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
requestedPhysicalIds.push_back(requestedPhysicalId.string());
} else {
ALOGW("%s: Output stream Id not found among configured outputs!", __FUNCTION__);
}
}
} else {
for (size_t i = 0; i < request.mStreamIdxList.size(); i++) {
int streamId = request.mStreamIdxList.itemAt(i);
int surfaceIdx = request.mSurfaceIdxList.itemAt(i);

ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
if (index < 0) {
ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
" we have not called createStream on: stream %d",
__FUNCTION__, mCameraIdStr.string(), streamId);
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request targets Surface that is not part of current capture session");
}

const auto& gbps = mConfiguredOutputs.valueAt(index).getGraphicBufferProducers();
if ((size_t)surfaceIdx >= gbps.size()) {
ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
" we have not called createStream on: stream %d, surfaceIdx %d",
__FUNCTION__, mCameraIdStr.string(), streamId, surfaceIdx);
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request targets Surface has invalid surface index");
}

res = insertGbpLocked(gbps[surfaceIdx], &surfaceMap, &outputStreamIds, nullptr);
if (!res.isOk()) {
return res;
}

String8 requestedPhysicalId(
mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
requestedPhysicalIds.push_back(requestedPhysicalId.string());
}
}

CameraDeviceBase::PhysicalCameraSettingsList physicalSettingsList;
for (const auto& it : request.mPhysicalCameraSettings) {
if (it.settings.isEmpty()) {
ALOGE("%s: Camera %s: Sent empty metadata packet. Rejecting request.",
__FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request settings are empty");
}

String8 physicalId(it.id.c_str());
if (physicalId != mDevice->getId()) {
auto found = std::find(requestedPhysicalIds.begin(), requestedPhysicalIds.end(),
it.id);
if (found == requestedPhysicalIds.end()) {
ALOGE("%s: Camera %s: Physical camera id: %s not part of attached outputs.",
__FUNCTION__, mCameraIdStr.string(), physicalId.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Invalid physical camera id");
}

if (!mSupportedPhysicalRequestKeys.empty()) {
// Filter out any unsupported physical request keys.
CameraMetadata filteredParams(mSupportedPhysicalRequestKeys.size());
camera_metadata_t *meta = const_cast<camera_metadata_t *>(
filteredParams.getAndLock());
set_camera_metadata_vendor_id(meta, mDevice->getVendorTagId());
filteredParams.unlock(meta);

for (const auto& keyIt : mSupportedPhysicalRequestKeys) {
camera_metadata_ro_entry entry = it.settings.find(keyIt);
if (entry.count > 0) {
filteredParams.update(entry);
}
}

physicalSettingsList.push_back({it.id, filteredParams});
}
} else {
physicalSettingsList.push_back({it.id, it.settings});
}
}

if (!enforceRequestPermissions(physicalSettingsList.begin()->metadata)) {
// Callee logs
return STATUS_ERROR(CameraService::ERROR_PERMISSION_DENIED,
"Caller does not have permission to change restricted controls");
}

physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS,
&outputStreamIds[0], outputStreamIds.size());

if (request.mIsReprocess) {
physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_INPUT_STREAMS,
&mInputStream.id, 1);
}

physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_ID,
&(submitInfo->mRequestId), /*size*/1);
loopCounter++; // loopCounter starts from 1
ALOGV("%s: Camera %s: Creating request with ID %d (%d of %zu)",
__FUNCTION__, mCameraIdStr.string(), submitInfo->mRequestId,
loopCounter, requests.size());
metadataRequestList.push_back(physicalSettingsList);
surfaceMapList.push_back(surfaceMap);
}
mRequestIdCounter++;

if (streaming) {
err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,
&(submitInfo->mLastFrameNumber));
if (err != OK) {
String8 msg = String8::format(
"Camera %s: Got error %s (%d) after trying to set streaming request",
mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
msg.string());
} else {
Mutex::Autolock idLock(mStreamingRequestIdLock);
mStreamingRequestId = submitInfo->mRequestId;
}
} else {
err = mDevice->captureList(metadataRequestList, surfaceMapList,
&(submitInfo->mLastFrameNumber));
if (err != OK) {
String8 msg = String8::format(
"Camera %s: Got error %s (%d) after trying to submit capture request",
mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
msg.string());
}
ALOGV("%s: requestId = %d ", __FUNCTION__, submitInfo->mRequestId);
}

ALOGV("%s: Camera %s: End of function", __FUNCTION__, mCameraIdStr.string());
return res;
}
3.2.4.4 setStreamingRequestList
status_t Camera3Device::setStreamingRequestList(
const List<const PhysicalCameraSettingsList> &requestsList,
const std::list<const SurfaceMap> &surfaceMaps, int64_t *lastFrameNumber) {
ATRACE_CALL();

return submitRequestsHelper(requestsList, surfaceMaps, /*repeating*/true, lastFrameNumber);
}

status_t Camera3Device::submitRequestsHelper(
const List<const PhysicalCameraSettingsList> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
bool repeating,
/*out*/
int64_t *lastFrameNumber) {
ATRACE_CALL();
Mutex::Autolock il(mInterfaceLock);
Mutex::Autolock l(mLock);

status_t res = checkStatusOkToCaptureLocked();
if (res != OK) {
// error logged by previous call
return res;
}

RequestList requestList;

res = convertMetadataListToRequestListLocked(requests, surfaceMaps,
repeating, /*out*/&requestList);
if (res != OK) {
// error logged by previous call
return res;
}

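// repeating 为 true 表示预览等重复请求,交给 RequestThread 循环下发;拍照等一次性请求则进入一次性请求队列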
if (repeating) {
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}

if (res == OK) {
waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
if (res != OK) {
SET_ERR_L("Can't transition to active in %f seconds!",
kActiveTimeout/1e9);
}
ALOGV("Camera %s: Capture request %" PRId32 " enqueued", mId.string(),
(*(requestList.begin()))->mResultExtras.requestId);
} else {
CLOGE("Cannot queue request. Impossible.");
return BAD_VALUE;
}

return res;
}
3.2.4.5 sendRequestsBatch
bool Camera3Device::RequestThread::sendRequestsBatch() {
ATRACE_CALL();
status_t res;
size_t batchSize = mNextRequests.size();
std::vector<camera3_capture_request_t*> requests(batchSize);
uint32_t numRequestProcessed = 0;
for (size_t i = 0; i < batchSize; i++) {
requests[i] = &mNextRequests.editItemAt(i).halRequest;
ATRACE_ASYNC_BEGIN("frame capture", mNextRequests[i].halRequest.frame_number);
}

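// 将整批请求一次性交给 HAL 接口层,numRequestProcessed 返回 HAL 实际接收的请求个数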
res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);

bool triggerRemoveFailed = false;
NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
for (size_t i = 0; i < numRequestProcessed; i++) {
NextRequest& nextRequest = mNextRequests.editItemAt(i);
nextRequest.submitted = true;

updateNextRequest(nextRequest);

if (!triggerRemoveFailed) {
// Remove any previously queued triggers (after unlock)
status_t removeTriggerRes = removeTriggers(mPrevRequest);
if (removeTriggerRes != OK) {
triggerRemoveFailed = true;
triggerFailedRequest = nextRequest;
}
}
}

if (triggerRemoveFailed) {
SET_ERR("RequestThread: Unable to remove triggers "
"(capture request %d, HAL device: %s (%d)",
triggerFailedRequest.halRequest.frame_number, strerror(-res), res);
cleanUpFailedRequests(/*sendRequestError*/ false);
return false;
}

if (res != OK) {
// Should only get a failure here for malformed requests or device-level
// errors, so consider all errors fatal. Bad metadata failures should
// come through notify.
SET_ERR("RequestThread: Unable to submit capture request %d to HAL device: %s (%d)",
mNextRequests[numRequestProcessed].halRequest.frame_number,
strerror(-res), res);
cleanUpFailedRequests(/*sendRequestError*/ false);
return false;
}
return true;
}
3.2.4.6 processBatchCaptureRequests
status_t Camera3Device::HalInterface::processBatchCaptureRequests(
std::vector<camera3_capture_request_t*>& requests,/*out*/uint32_t* numRequestProcessed) {
ATRACE_NAME("CameraHal::processBatchCaptureRequests");
if (!valid()) return INVALID_OPERATION;

sp<device::V3_4::ICameraDeviceSession> hidlSession_3_4;
auto castResult_3_4 = device::V3_4::ICameraDeviceSession::castFrom(mHidlSession);
if (castResult_3_4.isOk()) {
hidlSession_3_4 = castResult_3_4;
}

hardware::hidl_vec<device::V3_2::CaptureRequest> captureRequests;
hardware::hidl_vec<device::V3_4::CaptureRequest> captureRequests_3_4;
size_t batchSize = requests.size();
if (hidlSession_3_4 != nullptr) {
captureRequests_3_4.resize(batchSize);
} else {
captureRequests.resize(batchSize);
}
std::vector<native_handle_t*> handlesCreated;
std::vector<std::pair<int32_t, int32_t>> inflightBuffers;

status_t res = OK;
for (size_t i = 0; i < batchSize; i++) {
if (hidlSession_3_4 != nullptr) {
res = wrapAsHidlRequest(requests[i], /*out*/&captureRequests_3_4[i].v3_2,
/*out*/&handlesCreated, /*out*/&inflightBuffers);
} else {
res = wrapAsHidlRequest(requests[i], /*out*/&captureRequests[i],
/*out*/&handlesCreated, /*out*/&inflightBuffers);
}
if (res != OK) {
popInflightBuffers(inflightBuffers);
cleanupNativeHandles(&handlesCreated);
return res;
}
}

std::vector<device::V3_2::BufferCache> cachesToRemove;
{
std::lock_guard<std::mutex> lock(mBufferIdMapLock);
for (auto& pair : mFreedBuffers) {
// The stream might have been removed since onBufferFreed
if (mBufferIdMaps.find(pair.first) != mBufferIdMaps.end()) {
cachesToRemove.push_back({pair.first, pair.second});
}
}
mFreedBuffers.clear();
}

common::V1_0::Status status = common::V1_0::Status::INTERNAL_ERROR;
*numRequestProcessed = 0;

// Write metadata to FMQ.
for (size_t i = 0; i < batchSize; i++) {
camera3_capture_request_t* request = requests[i];
device::V3_2::CaptureRequest* captureRequest;
if (hidlSession_3_4 != nullptr) {
captureRequest = &captureRequests_3_4[i].v3_2;
} else {
captureRequest = &captureRequests[i];
}

if (request->settings != nullptr) {
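// 优先把 settings 写入 FMQ:成功则 settings 置空、fmqSettingsSize 记录字节数;失败则回退为把 metadata 内嵌在 hwbinder 事务中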
size_t settingsSize = get_camera_metadata_size(request->settings);
if (mRequestMetadataQueue != nullptr && mRequestMetadataQueue->write(
reinterpret_cast<const uint8_t*>(request->settings), settingsSize)) {
captureRequest->settings.resize(0);
captureRequest->fmqSettingsSize = settingsSize;
} else {
if (mRequestMetadataQueue != nullptr) {
ALOGW("%s: couldn't utilize fmq, fallback to hwbinder", __FUNCTION__);
}
captureRequest->settings.setToExternal(
reinterpret_cast<uint8_t*>(const_cast<camera_metadata_t*>(request->settings)),
get_camera_metadata_size(request->settings));
captureRequest->fmqSettingsSize = 0u;
}
} else {
// A null request settings maps to a size-0 CameraMetadata
captureRequest->settings.resize(0);
captureRequest->fmqSettingsSize = 0u;
}

if (hidlSession_3_4 != nullptr) {
captureRequests_3_4[i].physicalCameraSettings.resize(request->num_physcam_settings);
for (size_t j = 0; j < request->num_physcam_settings; j++) {
if (request->physcam_settings != nullptr) {
size_t settingsSize = get_camera_metadata_size(request->physcam_settings[j]);
if (mRequestMetadataQueue != nullptr && mRequestMetadataQueue->write(
reinterpret_cast<const uint8_t*>(request->physcam_settings[j]),
settingsSize)) {
captureRequests_3_4[i].physicalCameraSettings[j].settings.resize(0);
captureRequests_3_4[i].physicalCameraSettings[j].fmqSettingsSize =
settingsSize;
} else {
if (mRequestMetadataQueue != nullptr) {
ALOGW("%s: couldn't utilize fmq, fallback to hwbinder", __FUNCTION__);
}
captureRequests_3_4[i].physicalCameraSettings[j].settings.setToExternal(
reinterpret_cast<uint8_t*>(const_cast<camera_metadata_t*>(
request->physcam_settings[j])),
get_camera_metadata_size(request->physcam_settings[j]));
captureRequests_3_4[i].physicalCameraSettings[j].fmqSettingsSize = 0u;
}
} else {
captureRequests_3_4[i].physicalCameraSettings[j].fmqSettingsSize = 0u;
captureRequests_3_4[i].physicalCameraSettings[j].settings.resize(0);
}
captureRequests_3_4[i].physicalCameraSettings[j].physicalCameraId =
request->physcam_id[j];
}
}
}

hardware::details::return_status err;
auto resultCallback =
[&status, &numRequestProcessed] (auto s, uint32_t n) {
status = s;
*numRequestProcessed = n;
};
if (hidlSession_3_4 != nullptr) {
err = hidlSession_3_4->processCaptureRequest_3_4(captureRequests_3_4, cachesToRemove,
resultCallback);
} else {
err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
resultCallback);
}
if (!err.isOk()) {
ALOGE("%s: Transaction error: %s", __FUNCTION__, err.description().c_str());
status = common::V1_0::Status::CAMERA_DISCONNECTED;
}

if (status == common::V1_0::Status::OK && *numRequestProcessed != batchSize) {
ALOGE("%s: processCaptureRequest returns OK but processed %d/%zu requests",
__FUNCTION__, *numRequestProcessed, batchSize);
status = common::V1_0::Status::INTERNAL_ERROR;
}

res = CameraProviderManager::mapToStatusT(status);
if (res == OK) {
if (mHidlSession->isRemote()) {
// Only close acquire fence FDs when the HIDL transaction succeeds (so the FDs have been
// sent to camera HAL processes)
cleanupNativeHandles(&handlesCreated, /*closeFd*/true);
} else {
// In passthrough mode the FDs are now owned by HAL
cleanupNativeHandles(&handlesCreated);
}
} else {
popInflightBuffers(inflightBuffers);
cleanupNativeHandles(&handlesCreated);
}
return res;
}
3.2.4.7 小结

在创建Request成功之后,便可下发图像采集需求了。这里大致分为两个流程,一个是预览,一个是拍照,两者的差异主要体现在Camera Service中Request的获取优先级上:一般拍照Request的优先级高于预览,具体表现是当预览Request在不断下发的过程中来了一次拍照需求时,Camera3Device的RequestThread线程会优先下发此次拍照的Request。这里我们主要梳理下发拍照Request的大体流程:

下发拍照Request到Camera Service,其操作主要是由CameraDeviceClient的submitRequestList方法来实现,在该方法中,会根据请求是否为重复请求分别调用Camera3Device的setStreamingRequestList或captureList方法(拍照这类一次性请求走captureList),将需求发送到Camera3Device中,而Camera3Device又将需求加入到RequestThread中的RequestQueue中,并唤醒RequestThread线程。该线程被唤醒后,会从RequestQueue中取出Request,通过之前获取的ICameraDeviceSession代理的processCaptureRequest_3_4方法将需求发送至Provider中。由于谷歌要求processCaptureRequest_3_4必须是非阻塞实现,所以一旦发送成功便立即返回,App端则等待结果的异步回传。
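
下面用一段与AOSP源码无关的极简C++示意(TinyRequestQueue等名称均为便于说明而虚构),模拟上述"拍照Request优先于重复预览Request"的取请求逻辑;真实实现位于Camera3Device::RequestThread内部,细节要复杂得多:

#include <cstdio>
#include <deque>
#include <optional>
#include <string>

struct Request { std::string tag; };

// 极简模型:有一次性(拍照)请求就优先下发,否则继续重复下发预览请求
class TinyRequestQueue {
public:
    void setRepeating(Request r) { mRepeating = std::move(r); }
    void queueCapture(Request r) { mCaptures.push_back(std::move(r)); }

    std::optional<Request> next() {
        if (!mCaptures.empty()) {          // 拍照请求优先
            Request r = mCaptures.front();
            mCaptures.pop_front();
            return r;
        }
        return mRepeating;                 // 否则下发当前的重复(预览)请求
    }

private:
    std::deque<Request> mCaptures;
    std::optional<Request> mRepeating;
};

int main() {
    TinyRequestQueue q;
    q.setRepeating({"preview"});
    q.queueCapture({"still-capture"});
    for (int i = 0; i < 3; i++) {
        std::optional<Request> r = q.next();
        std::printf("send request: %s\n", r ? r->tag.c_str() : "none");
    }
    // 预期输出顺序: still-capture, preview, preview
    return 0;
}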

3.2.5 接收图像结果

针对结果的获取是通过异步方式实现的,主要分为两个部分,一个是事件的回传,一个是数据的回传,而数据中又根据流程的差异主要分为Meta Data和Image Data两个部分,接下来我们详细介绍下:

在下发Request之后,首先从Provider端传来的是Shutter Notify。

3.2.5.1 notify
void Camera3Device::notify(const camera3_notify_msg *msg) {
ATRACE_CALL();
sp<NotificationListener> listener;
{
Mutex::Autolock l(mOutputLock);
listener = mListener.promote();
}

if (msg == NULL) {
SET_ERR("HAL sent NULL notify message!");
return;
}

switch (msg->type) {
case CAMERA3_MSG_ERROR: {
notifyError(msg->message.error, listener);
break;
}
case CAMERA3_MSG_SHUTTER: {
notifyShutter(msg->message.shutter, listener);
break;
}
default:
SET_ERR("Unknown notify message from HAL: %d",
msg->type);
}
}
3.2.5.2 Camera3Device::notifyShutter
void Camera3Device::notifyShutter(const camera3_shutter_msg_t &msg,
sp<NotificationListener> listener) {
ATRACE_CALL();
ssize_t idx;

// Set timestamp for the request in the in-flight tracking
// and get the request ID to send upstream
{
Mutex::Autolock l(mInFlightLock);
idx = mInFlightMap.indexOfKey(msg.frame_number);
if (idx >= 0) {
InFlightRequest &r = mInFlightMap.editValueAt(idx);

// Verify ordering of shutter notifications
{
Mutex::Autolock l(mOutputLock);
// TODO: need to track errors for tighter bounds on expected frame number.
if (r.hasInputBuffer) {
if (msg.frame_number < mNextReprocessShutterFrameNumber) {
SET_ERR("Reprocess shutter notification out-of-order. Expected "
"notification for frame %d, got frame %d",
mNextReprocessShutterFrameNumber, msg.frame_number);
return;
}
mNextReprocessShutterFrameNumber = msg.frame_number + 1;
} else if (r.zslCapture && r.stillCapture) {
if (msg.frame_number < mNextZslStillShutterFrameNumber) {
SET_ERR("ZSL still capture shutter notification out-of-order. Expected "
"notification for frame %d, got frame %d",
mNextZslStillShutterFrameNumber, msg.frame_number);
return;
}
mNextZslStillShutterFrameNumber = msg.frame_number + 1;
} else {
if (msg.frame_number < mNextShutterFrameNumber) {
SET_ERR("Shutter notification out-of-order. Expected "
"notification for frame %d, got frame %d",
mNextShutterFrameNumber, msg.frame_number);
return;
}
mNextShutterFrameNumber = msg.frame_number + 1;
}
}

r.shutterTimestamp = msg.timestamp;
if (r.hasCallback) {
ALOGVV("Camera %s: %s: Shutter fired for frame %d (id %d) at %" PRId64,
mId.string(), __FUNCTION__,
msg.frame_number, r.resultExtras.requestId, msg.timestamp);
// Call listener, if any
if (listener != NULL) {
listener->notifyShutter(r.resultExtras, msg.timestamp);
}
// send pending result and buffers
sendCaptureResult(r.pendingMetadata, r.resultExtras,
r.collectedPartialResult, msg.frame_number,
r.hasInputBuffer, r.zslCapture && r.stillCapture,
r.physicalMetadatas);
}
bool timestampIncreasing = !(r.zslCapture || r.hasInputBuffer);
returnOutputBuffers(r.pendingOutputBuffers.array(),
r.pendingOutputBuffers.size(), r.shutterTimestamp, timestampIncreasing,
r.outputSurfaces, r.resultExtras);
r.pendingOutputBuffers.clear();

removeInFlightRequestIfReadyLocked(idx);
}
}
if (idx < 0) {
SET_ERR("Shutter notification for non-existent frame number %d",
msg.frame_number);
}
}
3.2.5.3 CameraDeviceClient::notifyShutter
void CameraDeviceClient::notifyShutter(const CaptureResultExtras& resultExtras,
nsecs_t timestamp) {
// Thread safe. Don't bother locking.
sp<hardware::camera2::ICameraDeviceCallbacks> remoteCb = getRemoteCallback();
if (remoteCb != 0) {
remoteCb->onCaptureStarted(resultExtras, timestamp);
}
Camera2ClientBase::notifyShutter(resultExtras, timestamp);

for (size_t i = 0; i < mCompositeStreamMap.size(); i++) {
mCompositeStreamMap.valueAt(i)->onShutter(resultExtras, timestamp);
}
}

template <typename TClientBase>
void Camera2ClientBase<TClientBase>::notifyShutter(const CaptureResultExtras& resultExtras,
nsecs_t timestamp) {
(void)resultExtras;
(void)timestamp;

if (!mDeviceActive) {
getCameraService()->updateProxyDeviceState(
hardware::ICameraServiceProxy::CAMERA_STATE_ACTIVE, TClientBase::mCameraIdStr,
TClientBase::mCameraFacing, TClientBase::mClientPackageName,
((mApi1CameraId < 0) ? hardware::ICameraServiceProxy::CAMERA_API_LEVEL_2 :
hardware::ICameraServiceProxy::CAMERA_API_LEVEL_1));
}
mDeviceActive = true;

ALOGV("%s: Shutter notification for request id %" PRId32 " at time %" PRId64,
__FUNCTION__, resultExtras.requestId, timestamp);
}
3.2.5.4 sendCaptureResult
void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
CaptureResultExtras &resultExtras,
CameraMetadata &collectedPartialResult,
uint32_t frameNumber,
bool reprocess, bool zslStillCapture,
const std::vector<PhysicalCaptureResultInfo>& physicalMetadatas) {
ATRACE_CALL();
if (pendingMetadata.isEmpty())
return;

Mutex::Autolock l(mOutputLock);

// TODO: need to track errors for tighter bounds on expected frame number
if (reprocess) {
if (frameNumber < mNextReprocessResultFrameNumber) {
SET_ERR("Out-of-order reprocess capture result metadata submitted! "
"(got frame number %d, expecting %d)",
frameNumber, mNextReprocessResultFrameNumber);
return;
}
mNextReprocessResultFrameNumber = frameNumber + 1;
} else if (zslStillCapture) {
if (frameNumber < mNextZslStillResultFrameNumber) {
SET_ERR("Out-of-order ZSL still capture result metadata submitted! "
"(got frame number %d, expecting %d)",
frameNumber, mNextZslStillResultFrameNumber);
return;
}
mNextZslStillResultFrameNumber = frameNumber + 1;
} else {
if (frameNumber < mNextResultFrameNumber) {
SET_ERR("Out-of-order capture result metadata submitted! "
"(got frame number %d, expecting %d)",
frameNumber, mNextResultFrameNumber);
return;
}
mNextResultFrameNumber = frameNumber + 1;
}

CaptureResult captureResult;
captureResult.mResultExtras = resultExtras;
captureResult.mMetadata = pendingMetadata;
captureResult.mPhysicalMetadatas = physicalMetadatas;

// Append any previous partials to form a complete result
if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
captureResult.mMetadata.append(collectedPartialResult);
}

captureResult.mMetadata.sort();

// Check that there's a timestamp in the result metadata
camera_metadata_entry timestamp = captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
if (timestamp.count == 0) {
SET_ERR("No timestamp provided by HAL for frame %d!",
frameNumber);
return;
}
nsecs_t sensorTimestamp = timestamp.data.i64[0];

for (auto& physicalMetadata : captureResult.mPhysicalMetadatas) {
camera_metadata_entry timestamp =
physicalMetadata.mPhysicalCameraMetadata.find(ANDROID_SENSOR_TIMESTAMP);
if (timestamp.count == 0) {
SET_ERR("No timestamp provided by HAL for physical camera %s frame %d!",
String8(physicalMetadata.mPhysicalCameraId).c_str(), frameNumber);
return;
}
}

// Fix up some result metadata to account for HAL-level distortion correction
status_t res =
mDistortionMappers[mId.c_str()].correctCaptureResult(&captureResult.mMetadata);
if (res != OK) {
SET_ERR("Unable to correct capture result metadata for frame %d: %s (%d)",
frameNumber, strerror(res), res);
return;
}
for (auto& physicalMetadata : captureResult.mPhysicalMetadatas) {
String8 cameraId8(physicalMetadata.mPhysicalCameraId);
if (mDistortionMappers.find(cameraId8.c_str()) == mDistortionMappers.end()) {
continue;
}
res = mDistortionMappers[cameraId8.c_str()].correctCaptureResult(
&physicalMetadata.mPhysicalCameraMetadata);
if (res != OK) {
SET_ERR("Unable to correct physical capture result metadata for frame %d: %s (%d)",
frameNumber, strerror(res), res);
return;
}
}

// Fix up result metadata for monochrome camera.
res = fixupMonochromeTags(mDeviceInfo, captureResult.mMetadata);
if (res != OK) {
SET_ERR("Failed to override result metadata: %s (%d)", strerror(-res), res);
return;
}
for (auto& physicalMetadata : captureResult.mPhysicalMetadatas) {
String8 cameraId8(physicalMetadata.mPhysicalCameraId);
res = fixupMonochromeTags(mPhysicalDeviceInfoMap.at(cameraId8.c_str()),
physicalMetadata.mPhysicalCameraMetadata);
if (res != OK) {
SET_ERR("Failed to override result metadata: %s (%d)", strerror(-res), res);
return;
}
}

std::unordered_map<std::string, CameraMetadata> monitoredPhysicalMetadata;
for (auto& m : physicalMetadatas) {
monitoredPhysicalMetadata.emplace(String8(m.mPhysicalCameraId).string(),
CameraMetadata(m.mPhysicalCameraMetadata));
}
mTagMonitor.monitorMetadata(TagMonitor::RESULT,
frameNumber, sensorTimestamp, captureResult.mMetadata,
monitoredPhysicalMetadata);

insertResultLocked(&captureResult, frameNumber);
}
3.2.5.5 insertResultLocked
void Camera3Device::insertResultLocked(CaptureResult *result,
uint32_t frameNumber) {
if (result == nullptr) return;

camera_metadata_t *meta = const_cast<camera_metadata_t *>(
result->mMetadata.getAndLock());
set_camera_metadata_vendor_id(meta, mVendorTagId);
result->mMetadata.unlock(meta);

if (result->mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
(int32_t*)&frameNumber, 1) != OK) {
SET_ERR("Failed to set frame number %d in metadata", frameNumber);
return;
}

if (result->mMetadata.update(ANDROID_REQUEST_ID, &result->mResultExtras.requestId, 1) != OK) {
SET_ERR("Failed to set request ID in metadata for frame %d", frameNumber);
return;
}

// Update vendor tag id for physical metadata
for (auto& physicalMetadata : result->mPhysicalMetadatas) {
camera_metadata_t *pmeta = const_cast<camera_metadata_t *>(
physicalMetadata.mPhysicalCameraMetadata.getAndLock());
set_camera_metadata_vendor_id(pmeta, mVendorTagId);
physicalMetadata.mPhysicalCameraMetadata.unlock(pmeta);
}

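// Vendor-specific (QTI) extension: read the custom multi-camera info and
// sensor serial number tags from the result metadata, for logging only.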
String8 tagString = String8("org.qti.camera.customControl.multicamInfo");
camera_metadata_entry entry;
getMetaDataFromResults(result->mMetadata, tagString, &entry);
if (entry.count != 0) {
CHIMULTICAMINFO *multiCam;
multiCam = (CHIMULTICAMINFO *)(entry.data.u8);
ALOGVV("%s: add_metadata_log, get data succeed, data count: %zu, data: %d, %d",
__FUNCTION__, entry.count, multiCam->camraId, multiCam->testDataSet);
}

tagString = String8("org.qti.camera.cameraSnInfo.sensorSnNum");
getMetaDataFromResults(result->mMetadata, tagString, &entry);
if (entry.count != 0) {
int64_t* cameraSnNumber;
cameraSnNumber = reinterpret_cast<int64_t*>(entry.data.u8);
ALOGVV("%s: add_metadata_log, get data succeed, data count: %zu, data: %" PRId64 "",
__FUNCTION__, entry.count, *cameraSnNumber);
}

// Valid result, insert into queue
List<CaptureResult>::iterator queuedResult =
mResultQueue.insert(mResultQueue.end(), CaptureResult(*result));
ALOGVV("%s: result requestId = %" PRId32 ", frameNumber = %" PRId64
", burstId = %" PRId32, __FUNCTION__,
queuedResult->mResultExtras.requestId,
queuedResult->mResultExtras.frameNumber,
queuedResult->mResultExtras.burstId);

mResultSignal.signal();
}
3.2.5.6 processNewFrames
bool FrameProcessorBase::threadLoop() {
status_t res;

sp<CameraDeviceBase> device;
{
device = mDevice.promote();
if (device == 0) return false;
}

res = device->waitForNextFrame(kWaitDuration);
if (res == OK) {
processNewFrames(device);
} else if (res != TIMED_OUT) {
ALOGE("FrameProcessorBase: Error waiting for new "
"frames: %s (%d)", strerror(-res), res);
}

return true;
}

void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
status_t res;
ATRACE_CALL();
CaptureResult result;

ALOGV("%s: Camera %s: Process new frames", __FUNCTION__, device->getId().string());

while ( (res = device->getNextResult(&result)) == OK) {

// TODO: instead of getting frame number from metadata, we should read
// this from result.mResultExtras when CameraDeviceBase interface is fixed.
camera_metadata_entry_t entry;

entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
if (entry.count == 0) {
ALOGE("%s: Camera %s: Error reading frame number",
__FUNCTION__, device->getId().string());
break;
}
ATRACE_INT("cam2_frame", entry.data.i32[0]);

if (!processSingleFrame(result, device)) {
break;
}

if (!result.mMetadata.isEmpty()) {
Mutex::Autolock al(mLastFrameMutex);
mLastFrame.acquire(result.mMetadata);

mLastPhysicalFrames = std::move(result.mPhysicalMetadatas);
}
}
if (res != NOT_ENOUGH_DATA) {
ALOGE("%s: Camera %s: Error getting next frame: %s (%d)",
__FUNCTION__, device->getId().string(), strerror(-res), res);
return;
}

return;
}
3.2.5.7 processSingleFrame
bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
const sp<CameraDeviceBase> &device) {
ALOGV("%s: Camera %s: Process single frame (is empty? %d)",
__FUNCTION__, device->getId().string(), result.mMetadata.isEmpty());
return processListeners(result, device) == OK;
}

status_t FrameProcessorBase::processListeners(const CaptureResult &result,
const sp<CameraDeviceBase> &device) {
ATRACE_CALL();

camera_metadata_ro_entry_t entry;

// Check if this result is partial.
bool isPartialResult =
result.mResultExtras.partialResultCount < mNumPartialResults;

// TODO: instead of getting requestID from CameraMetadata, we should get it
// from CaptureResultExtras. This will require changing Camera2Device.
// Currently Camera2Device uses MetadataQueue to store results, which does not
// include CaptureResultExtras.
entry = result.mMetadata.find(ANDROID_REQUEST_ID);
if (entry.count == 0) {
ALOGE("%s: Camera %s: Error reading frame id", __FUNCTION__, device->getId().string());
return BAD_VALUE;
}
int32_t requestId = entry.data.i32[0];

List<sp<FilteredListener> > listeners;
{
Mutex::Autolock l(mInputMutex);

List<RangeListener>::iterator item = mRangeListeners.begin();
// Don't deliver partial results to listeners that don't want them
while (item != mRangeListeners.end()) {
if (requestId >= item->minId && requestId < item->maxId &&
(!isPartialResult || item->sendPartials)) {
sp<FilteredListener> listener = item->listener.promote();
if (listener == 0) {
item = mRangeListeners.erase(item);
continue;
} else {
listeners.push_back(listener);
}
}
item++;
}
}
ALOGV("%s: Camera %s: Got %zu range listeners out of %zu", __FUNCTION__,
device->getId().string(), listeners.size(), mRangeListeners.size());

List<sp<FilteredListener> >::iterator item = listeners.begin();
for (; item != listeners.end(); item++) {
(*item)->onResultAvailable(result);
}
return OK;
}
3.2.5.8 returnOutputBuffers
void Camera3Device::returnOutputBuffers(
const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
nsecs_t timestamp, bool timestampIncreasing,
const SurfaceMap& outputSurfaces,
const CaptureResultExtras &inResultExtras) {

for (size_t i = 0; i < numBuffers; i++)
{
if (outputBuffers[i].buffer == nullptr) {
if (!mUseHalBufManager) {
// With HAL buffer management API, HAL sometimes will have to return buffers that
// has not got a output buffer handle filled yet. This is though illegal if HAL
// buffer management API is not being used.
ALOGE("%s: cannot return a null buffer!", __FUNCTION__);
}
continue;
}

Camera3StreamInterface *stream = Camera3Stream::cast(outputBuffers[i].stream);
int streamId = stream->getId();
const auto& it = outputSurfaces.find(streamId);
status_t res = OK;
if (it != outputSurfaces.end()) {
res = stream->returnBuffer(
outputBuffers[i], timestamp, timestampIncreasing, it->second,
inResultExtras.frameNumber);
} else {
res = stream->returnBuffer(
outputBuffers[i], timestamp, timestampIncreasing, std::vector<size_t> (),
inResultExtras.frameNumber);
}

// Note: stream may be deallocated at this point, if this buffer was
// the last reference to it.
if (res == NO_INIT || res == DEAD_OBJECT) {
ALOGV("Can't return buffer to its stream: %s (%d)", strerror(-res), res);
} else if (res != OK) {
ALOGE("Can't return buffer to its stream: %s (%d)", strerror(-res), res);
}

// Long processing consumers can cause returnBuffer timeout for shared stream
// If that happens, cancel the buffer and send a buffer error to client
if (it != outputSurfaces.end() && res == TIMED_OUT &&
outputBuffers[i].status == CAMERA3_BUFFER_STATUS_OK) {
// cancel the buffer
camera3_stream_buffer_t sb = outputBuffers[i];
sb.status = CAMERA3_BUFFER_STATUS_ERROR;
stream->returnBuffer(sb, /*timestamp*/0, timestampIncreasing, std::vector<size_t> (),
inResultExtras.frameNumber);

// notify client buffer error
sp<NotificationListener> listener;
{
Mutex::Autolock l(mOutputLock);
listener = mListener.promote();
}

if (listener != nullptr) {
CaptureResultExtras extras = inResultExtras;
extras.errorStreamId = streamId;
listener->notifyError(
hardware::camera2::ICameraDeviceCallbacks::ERROR_CAMERA_BUFFER,
extras);
}
}
}
}
3.2.5.9 Summary

After a request has been issued, the first thing to come back from the Provider side is the shutter notification. Since Camera3Device was previously handed to the Provider as the implementation of ICameraDeviceCallback, the Provider invokes Camera3Device's notify method to deliver the event into the Camera Service. The event then travels up through a chain of calls into CameraDeviceClient via its notifyShutter method, and is finally delivered to the Framework through the onCaptureStarted method of the CameraDeviceCallbacks interface that the Framework passed in when the camera device was opened, from where it reaches the app.
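
To make that call chain concrete, here is a minimal, self-contained C++ sketch of the dispatch pattern. All names here (Device, DeviceClient, ShutterListener, FrameworkCallbacks) are hypothetical stand-ins for Camera3Device, CameraDeviceClient, NotificationListener and ICameraDeviceCallbacks; it only models the forwarding, not the real AOSP logic.

#include <cstdint>
#include <cstdio>
#include <memory>

// Framework-side callback, playing the role of ICameraDeviceCallbacks.
struct FrameworkCallbacks {
    virtual ~FrameworkCallbacks() = default;
    virtual void onCaptureStarted(int64_t frameNumber, int64_t timestampNs) = 0;
};

// Listener the device layer reports to, playing the role of NotificationListener.
struct ShutterListener {
    virtual ~ShutterListener() = default;
    virtual void notifyShutter(int64_t frameNumber, int64_t timestampNs) = 0;
};

// Service-side client (role of CameraDeviceClient): relays the event to the
// framework callback it was handed when the device was opened.
class DeviceClient : public ShutterListener {
public:
    explicit DeviceClient(std::shared_ptr<FrameworkCallbacks> cb)
            : mRemoteCallback(std::move(cb)) {}
    void notifyShutter(int64_t frameNumber, int64_t timestampNs) override {
        if (mRemoteCallback) {
            mRemoteCallback->onCaptureStarted(frameNumber, timestampNs);
        }
    }
private:
    std::shared_ptr<FrameworkCallbacks> mRemoteCallback;
};

// Device layer (role of Camera3Device): the Provider-facing notify() path fans
// the shutter event out to whichever listener the service registered.
class Device {
public:
    void setListener(const std::shared_ptr<ShutterListener>& listener) {
        mListener = listener;
    }
    void notifyShutterFromHal(int64_t frameNumber, int64_t timestampNs) {
        if (auto listener = mListener.lock()) {
            listener->notifyShutter(frameNumber, timestampNs);
        }
    }
private:
    std::weak_ptr<ShutterListener> mListener;
};

int main() {
    struct AppCallbacks : FrameworkCallbacks {
        void onCaptureStarted(int64_t frame, int64_t ts) override {
            std::printf("onCaptureStarted: frame %lld, timestamp %lld ns\n",
                        (long long)frame, (long long)ts);
        }
    };
    auto client = std::make_shared<DeviceClient>(std::make_shared<AppCallbacks>());
    Device device;
    device.setListener(client);
    device.notifyShutterFromHal(1, 123456789);  // simulates the HAL shutter message
    return 0;
}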

Once the shutter event has been reported, as soon as metadata is generated the Camera Provider hands it to the Camera Service through the processCaptureResult_3_4 method of ICameraDeviceCallback, whose implementation is Camera3Device::processCaptureResult_3_4. Through a chain of calls that method eventually reaches sendCaptureResult, which places the result into mResultQueue and signals the FrameProcessorBase thread to take the result out and dispatch it to CameraDeviceClient. CameraDeviceClient then forwards the result to the Framework layer through the onResultReceived method of its remote CameraDeviceCallbacks proxy, and from there it reaches the app for processing.
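
The handoff between sendCaptureResult and the FrameProcessorBase thread is essentially a signalled queue plus range-filtered listeners. The sketch below models that flow with standard C++ threads and hypothetical names (ResultDispatcher, queueResult, processLoop); it illustrates the pattern only and is not the framework code itself.

#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Minimal stand-in for CaptureResult: just a request id and a frame number.
struct Result { int32_t requestId; int64_t frameNumber; };

class ResultDispatcher {
public:
    // Analogous to FrameProcessorBase::registerListener(minId, maxId, listener).
    void registerListener(int32_t minId, int32_t maxId,
                          std::function<void(const Result&)> cb) {
        std::lock_guard<std::mutex> l(mMutex);
        mListeners.push_back({minId, maxId, std::move(cb)});
    }

    // Analogous to sendCaptureResult(): enqueue the result and signal the worker.
    void queueResult(const Result& r) {
        {
            std::lock_guard<std::mutex> l(mMutex);
            mQueue.push_back(r);
        }
        mSignal.notify_one();
    }

    // Analogous to the FrameProcessorBase thread loop: wait, drain, dispatch.
    void processLoop() {
        std::unique_lock<std::mutex> l(mMutex);
        while (true) {
            mSignal.wait(l, [this] { return mStopped || !mQueue.empty(); });
            while (!mQueue.empty()) {
                Result r = mQueue.front();
                mQueue.pop_front();
                auto listeners = mListeners;   // copy so callbacks run unlocked
                l.unlock();
                for (const auto& entry : listeners) {
                    // Same [minId, maxId) filtering idea as processListeners().
                    if (r.requestId >= entry.minId && r.requestId < entry.maxId) {
                        entry.cb(r);           // e.g. forward via onResultReceived
                    }
                }
                l.lock();
            }
            if (mStopped) break;
        }
    }

    void stop() {
        { std::lock_guard<std::mutex> l(mMutex); mStopped = true; }
        mSignal.notify_one();
    }

private:
    struct RangeListener {
        int32_t minId;
        int32_t maxId;
        std::function<void(const Result&)> cb;
    };
    std::mutex mMutex;
    std::condition_variable mSignal;
    std::deque<Result> mQueue;
    std::vector<RangeListener> mListeners;
    bool mStopped = false;
};

int main() {
    ResultDispatcher dispatcher;
    dispatcher.registerListener(0, 100, [](const Result& r) {
        std::printf("result for request %d, frame %lld\n",
                    r.requestId, (long long)r.frameNumber);
    });
    std::thread worker(&ResultDispatcher::processLoop, &dispatcher);
    dispatcher.queueResult({7, 42});
    dispatcher.queueResult({7, 43});
    dispatcher.stop();
    worker.join();
    return 0;
}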

Image data initially follows a similar path into Camera3Device, but it is handed to Camera3OutputStream via the returnOutputBuffers method. The stream, acting as the producer in the BufferQueue producer-consumer model, queues the buffer to notify the consumer that it can be consumed; the consumer is a Surface-holding class on the app side such as ImageReader, and the app can then pull the image data out for post-processing.
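
On the app side, the native counterpart of the Java ImageReader consumer is the NDK AImageReader. The following minimal sketch (API level 24+, linked against libmediandk; the resolution and buffer count are illustrative assumptions) shows how the consumer end of this BufferQueue is created, and how a queued buffer is acquired and released once the producer, i.e. the camera output stream, has filled it.

#include <android/log.h>
#include <android/native_window.h>
#include <media/NdkImage.h>
#include <media/NdkImageReader.h>

// Called on the reader's callback thread each time the producer (the camera
// output stream) queues a filled buffer into the BufferQueue.
static void onImageAvailable(void* /*context*/, AImageReader* reader) {
    AImage* image = nullptr;
    if (AImageReader_acquireNextImage(reader, &image) == AMEDIA_OK) {
        int64_t timestampNs = 0;
        AImage_getTimestamp(image, &timestampNs);
        __android_log_print(ANDROID_LOG_DEBUG, "CameraDemo",
                "frame available, sensor timestamp %lld ns", (long long)timestampNs);
        AImage_delete(image);  // releases the buffer back to the BufferQueue
    }
}

// Creates the consumer and returns the ANativeWindow (Surface) that would be
// handed to the camera as an output target.
ANativeWindow* createPreviewConsumer(AImageReader** outReader) {
    AImageReader* reader = nullptr;
    if (AImageReader_new(1920, 1080, AIMAGE_FORMAT_YUV_420_888, /*maxImages*/ 4,
            &reader) != AMEDIA_OK) {
        return nullptr;
    }
    AImageReader_ImageListener listener = { /*context*/ nullptr, onImageAvailable };
    AImageReader_setImageListener(reader, &listener);

    ANativeWindow* window = nullptr;
    AImageReader_getWindow(reader, &window);  // the Surface the camera queues buffers into
    *outReader = reader;
    return window;
}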

四、Summary

The Camera Service layer has existed since the first generation of the Android camera framework. Its job has always been to stay loosely coupled with the Camera Framework above it and accept its image requests; internally it originally wrapped the Camera HAL Module and controlled it through the HAL interfaces. From the start, then, Google followed a layered design and pulled the hardware abstraction layer out into the Service for separate management, with the obvious benefit that the vendor-implemented hardware abstraction layer is decoupled from the system layer and can be controlled independently. Later, when Google moved vendor implementations into the vendor partition and thereby fully isolated the system from platform vendors at the partition level, it took the opportunity to decouple the Camera HAL Module from the Camera Service and place it in Camera Provider, an independent process living in the vendor partition. Since then, the Camera Service's responsibility has been to accept requests from the Camera Framework and forward them on to the Camera Provider, acting as a relay station within the system.